00:00:00.001 Started by upstream project "autotest-nightly" build number 3339 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 2733 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.107 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.108 The recommended git tool is: git 00:00:00.108 using credential 00000000-0000-0000-0000-000000000002 00:00:00.109 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.138 Fetching changes from the remote Git repository 00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.155 Using shallow fetch with depth 1 00:00:00.155 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.155 > git --version # timeout=10 00:00:00.173 > git --version # 'git version 2.39.2' 00:00:00.173 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.174 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.174 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.845 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.857 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.867 Checking out Revision 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 (FETCH_HEAD) 00:00:03.868 > git config core.sparsecheckout # timeout=10 00:00:03.879 > git read-tree -mu HEAD # timeout=10 00:00:03.895 > git checkout -f 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 # timeout=5 00:00:03.915 Commit message: "jenkins/check-jenkins-labels: add ExtraStorage label" 00:00:03.915 > git rev-list --no-walk 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 # timeout=10 00:00:04.016 [Pipeline] Start of Pipeline 00:00:04.025 [Pipeline] library 00:00:04.027 Loading library shm_lib@master 00:00:04.027 Library shm_lib@master is cached. Copying from home. 00:00:04.038 [Pipeline] node 00:00:04.050 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.051 [Pipeline] { 00:00:04.058 [Pipeline] catchError 00:00:04.060 [Pipeline] { 00:00:04.068 [Pipeline] wrap 00:00:04.075 [Pipeline] { 00:00:04.080 [Pipeline] stage 00:00:04.082 [Pipeline] { (Prologue) 00:00:04.093 [Pipeline] echo 00:00:04.094 Node: VM-host-SM9 00:00:04.098 [Pipeline] cleanWs 00:00:04.106 [WS-CLEANUP] Deleting project workspace... 00:00:04.106 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.111 [WS-CLEANUP] done 00:00:04.263 [Pipeline] setCustomBuildProperty 00:00:04.306 [Pipeline] nodesByLabel 00:00:04.307 Found a total of 2 nodes with the 'sorcerer' label 00:00:04.314 [Pipeline] httpRequest 00:00:04.317 HttpMethod: GET 00:00:04.318 URL: http://10.211.11.40/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:04.326 Sending request to url: http://10.211.11.40/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:04.329 Response Code: HTTP/1.1 200 OK 00:00:04.330 Success: Status code 200 is in the accepted range: 200,404 00:00:04.330 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:05.583 [Pipeline] sh 00:00:05.861 + tar --no-same-owner -xf jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:05.875 [Pipeline] httpRequest 00:00:05.878 HttpMethod: GET 00:00:05.879 URL: http://10.211.11.40/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:05.879 Sending request to url: http://10.211.11.40/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:05.880 Response Code: HTTP/1.1 200 OK 00:00:05.880 Success: Status code 200 is in the accepted range: 200,404 00:00:05.881 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:32.459 [Pipeline] sh 00:00:32.739 + tar --no-same-owner -xf spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:35.281 [Pipeline] sh 00:00:35.560 + git -C spdk log --oneline -n5 00:00:35.560 aa824ae66 bdevperf: remove max io size limit for verify 00:00:35.560 161ef3f54 scripts/perf: Rename vhost_*master_core to vhost_*main_core 00:00:35.560 8bba6ed63 fuzz/llvm_vfio_fuzz: Adjust array index to avoid overflow 00:00:35.560 387dbedc4 env_dpdk: fix build with OpenSSL < 3.0.0 00:00:35.560 2b5de63c1 include: ensure ENOKEY is defined on FreeBSD 00:00:35.579 [Pipeline] writeFile 00:00:35.594 [Pipeline] sh 00:00:35.874 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:35.886 [Pipeline] sh 00:00:36.169 + cat autorun-spdk.conf 00:00:36.169 RUN_NIGHTLY=1 00:00:36.169 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.169 SPDK_TEST_NVME=1 00:00:36.169 SPDK_TEST_FTL=1 00:00:36.169 SPDK_TEST_ISAL=1 00:00:36.169 SPDK_RUN_ASAN=1 00:00:36.169 SPDK_RUN_UBSAN=1 00:00:36.169 SPDK_TEST_XNVME=1 00:00:36.169 SPDK_TEST_NVME_FDP=1 00:00:36.176 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:36.177 [Pipeline] } 00:00:36.191 [Pipeline] // stage 00:00:36.204 [Pipeline] stage 00:00:36.205 [Pipeline] { (Run VM) 00:00:36.217 [Pipeline] sh 00:00:36.496 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:36.496 + echo 'Start stage prepare_nvme.sh' 00:00:36.496 Start stage prepare_nvme.sh 00:00:36.496 + [[ -n 4 ]] 00:00:36.496 + disk_prefix=ex4 00:00:36.496 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:36.496 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:36.496 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:36.496 ++ RUN_NIGHTLY=1 00:00:36.496 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.496 ++ SPDK_TEST_NVME=1 00:00:36.496 ++ SPDK_TEST_FTL=1 00:00:36.496 ++ SPDK_TEST_ISAL=1 00:00:36.496 ++ SPDK_RUN_ASAN=1 00:00:36.496 ++ SPDK_RUN_UBSAN=1 00:00:36.496 ++ SPDK_TEST_XNVME=1 00:00:36.496 ++ SPDK_TEST_NVME_FDP=1 00:00:36.496 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:36.496 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:36.496 + nvme_files=() 00:00:36.496 + declare -A nvme_files 00:00:36.496 + backend_dir=/var/lib/libvirt/images/backends 
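
The autorun-spdk.conf assembled above is consumed twice in this job: it is sourced by the stage scripts on the Jenkins host (as the trace that follows shows) and later handed verbatim to spdk/autorun.sh inside the guest. A minimal sketch of reproducing that configuration by hand, assuming the same /home/vagrant/spdk_repo layout used in this run, would be:

    # Sketch only: values mirror the autorun-spdk.conf printed above.
    cat > /home/vagrant/spdk_repo/autorun-spdk.conf <<'EOF'
    RUN_NIGHTLY=1
    SPDK_RUN_FUNCTIONAL_TEST=1
    SPDK_TEST_NVME=1
    SPDK_TEST_FTL=1
    SPDK_TEST_ISAL=1
    SPDK_RUN_ASAN=1
    SPDK_RUN_UBSAN=1
    SPDK_TEST_XNVME=1
    SPDK_TEST_NVME_FDP=1
    SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
    EOF
    /home/vagrant/spdk_repo/spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
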
00:00:36.496 + nvme_files['nvme.img']=5G 00:00:36.496 + nvme_files['nvme-cmb.img']=5G 00:00:36.496 + nvme_files['nvme-multi0.img']=4G 00:00:36.496 + nvme_files['nvme-multi1.img']=4G 00:00:36.496 + nvme_files['nvme-multi2.img']=4G 00:00:36.496 + nvme_files['nvme-openstack.img']=8G 00:00:36.496 + nvme_files['nvme-zns.img']=5G 00:00:36.496 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:36.496 + (( SPDK_TEST_FTL == 1 )) 00:00:36.496 + nvme_files["nvme-ftl.img"]=6G 00:00:36.496 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:36.496 + nvme_files["nvme-fdp.img"]=1G 00:00:36.496 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:36.496 + for nvme in "${!nvme_files[@]}" 00:00:36.496 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:36.496 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.496 + for nvme in "${!nvme_files[@]}" 00:00:36.496 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:00:36.496 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:36.496 + for nvme in "${!nvme_files[@]}" 00:00:36.496 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:36.496 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.496 + for nvme in "${!nvme_files[@]}" 00:00:36.496 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:36.496 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:36.496 + for nvme in "${!nvme_files[@]}" 00:00:36.496 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:36.755 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.755 + for nvme in "${!nvme_files[@]}" 00:00:36.755 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:36.755 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.755 + for nvme in "${!nvme_files[@]}" 00:00:36.755 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:36.755 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.755 + for nvme in "${!nvme_files[@]}" 00:00:36.755 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:00:36.755 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:36.755 + for nvme in "${!nvme_files[@]}" 00:00:36.755 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:36.755 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.755 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:36.755 + echo 'End stage prepare_nvme.sh' 00:00:36.755 End stage prepare_nvme.sh 00:00:36.766 [Pipeline] sh 00:00:37.086 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:37.086 Setup: -n 10 
-s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:00:37.346 00:00:37.346 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:37.346 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:37.346 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:37.346 HELP=0 00:00:37.346 DRY_RUN=0 00:00:37.346 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:00:37.346 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:37.346 NVME_AUTO_CREATE=0 00:00:37.346 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:00:37.346 NVME_CMB=,,,, 00:00:37.346 NVME_PMR=,,,, 00:00:37.346 NVME_ZNS=,,,, 00:00:37.346 NVME_MS=true,,,, 00:00:37.346 NVME_FDP=,,,on, 00:00:37.346 SPDK_VAGRANT_DISTRO=fedora38 00:00:37.346 SPDK_VAGRANT_VMCPU=10 00:00:37.346 SPDK_VAGRANT_VMRAM=12288 00:00:37.346 SPDK_VAGRANT_PROVIDER=libvirt 00:00:37.346 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:37.346 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:37.346 SPDK_OPENSTACK_NETWORK=0 00:00:37.346 VAGRANT_PACKAGE_BOX=0 00:00:37.346 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:37.346 FORCE_DISTRO=true 00:00:37.346 VAGRANT_BOX_VERSION= 00:00:37.346 EXTRA_VAGRANTFILES= 00:00:37.346 NIC_MODEL=e1000 00:00:37.346 00:00:37.346 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:00:37.346 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:40.632 Bringing machine 'default' up with 'libvirt' provider... 00:00:40.891 ==> default: Creating image (snapshot of base box volume). 00:00:40.891 ==> default: Creating domain with the following settings... 
00:00:40.891 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1707937278_df3acd0566abecd821f2 00:00:40.891 ==> default: -- Domain type: kvm 00:00:40.891 ==> default: -- Cpus: 10 00:00:40.891 ==> default: -- Feature: acpi 00:00:40.891 ==> default: -- Feature: apic 00:00:40.891 ==> default: -- Feature: pae 00:00:40.891 ==> default: -- Memory: 12288M 00:00:40.891 ==> default: -- Memory Backing: hugepages: 00:00:40.891 ==> default: -- Management MAC: 00:00:40.891 ==> default: -- Loader: 00:00:40.891 ==> default: -- Nvram: 00:00:40.891 ==> default: -- Base box: spdk/fedora38 00:00:40.891 ==> default: -- Storage pool: default 00:00:40.891 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1707937278_df3acd0566abecd821f2.img (20G) 00:00:40.891 ==> default: -- Volume Cache: default 00:00:40.891 ==> default: -- Kernel: 00:00:40.891 ==> default: -- Initrd: 00:00:40.891 ==> default: -- Graphics Type: vnc 00:00:40.891 ==> default: -- Graphics Port: -1 00:00:40.891 ==> default: -- Graphics IP: 127.0.0.1 00:00:40.891 ==> default: -- Graphics Password: Not defined 00:00:40.891 ==> default: -- Video Type: cirrus 00:00:40.891 ==> default: -- Video VRAM: 9216 00:00:40.891 ==> default: -- Sound Type: 00:00:40.891 ==> default: -- Keymap: en-us 00:00:40.891 ==> default: -- TPM Path: 00:00:40.891 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:40.891 ==> default: -- Command line args: 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:40.891 ==> default: -> value=-drive, 00:00:40.891 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:40.891 ==> default: -> value=-drive, 00:00:40.891 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:00:40.891 ==> default: -> value=-drive, 00:00:40.891 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.891 ==> default: -> value=-drive, 00:00:40.891 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.891 ==> default: -> value=-drive, 00:00:40.891 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:00:40.891 ==> default: -> value=-drive, 00:00:40.891 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:40.891 ==> default: -> value=-device, 00:00:40.891 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:41.151 ==> default: Creating shared folders metadata... 00:00:41.151 ==> default: Starting domain. 00:00:42.525 ==> default: Waiting for domain to get an IP address... 00:01:00.609 ==> default: Waiting for SSH to become available... 00:01:01.542 ==> default: Configuring and enabling network interfaces... 00:01:06.809 default: SSH address: 192.168.121.171:22 00:01:06.809 default: SSH username: vagrant 00:01:06.809 default: SSH auth method: private key 00:01:08.187 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:16.304 ==> default: Mounting SSHFS shared folder... 00:01:17.240 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:17.240 ==> default: Checking Mount.. 00:01:18.619 ==> default: Folder Successfully Mounted! 00:01:18.619 ==> default: Running provisioner: file... 00:01:19.185 default: ~/.gitconfig => .gitconfig 00:01:19.753 00:01:19.753 SUCCESS! 00:01:19.753 00:01:19.753 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:19.753 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:19.753 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:19.753 00:01:19.763 [Pipeline] } 00:01:19.782 [Pipeline] // stage 00:01:19.790 [Pipeline] dir 00:01:19.791 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:01:19.792 [Pipeline] { 00:01:19.807 [Pipeline] catchError 00:01:19.809 [Pipeline] { 00:01:19.823 [Pipeline] sh 00:01:20.104 + vagrant ssh-config --host vagrant 00:01:20.104 + sed -ne /^Host/,$p 00:01:20.104 + tee ssh_conf 00:01:23.424 Host vagrant 00:01:23.424 HostName 192.168.121.171 00:01:23.424 User vagrant 00:01:23.424 Port 22 00:01:23.424 UserKnownHostsFile /dev/null 00:01:23.424 StrictHostKeyChecking no 00:01:23.424 PasswordAuthentication no 00:01:23.424 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:01:23.424 IdentitiesOnly yes 00:01:23.424 LogLevel FATAL 00:01:23.424 ForwardAgent yes 00:01:23.424 ForwardX11 yes 00:01:23.424 00:01:23.439 [Pipeline] withEnv 00:01:23.441 [Pipeline] { 00:01:23.454 [Pipeline] sh 00:01:23.735 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:23.735 source /etc/os-release 00:01:23.735 [[ -e /image.version ]] && img=$(< /image.version) 00:01:23.735 # Minimal, systemd-like check. 
00:01:23.735 if [[ -e /.dockerenv ]]; then 00:01:23.735 # Clear garbage from the node's name: 00:01:23.735 # agt-er_autotest_547-896 -> autotest_547-896 00:01:23.735 agent=${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:23.735 if mountpoint -q /etc/hostname; then 00:01:23.735 # We can assume this is a mount from a host where container is running, 00:01:23.735 # so fetch its hostname to easily identify the target swarm worker. 00:01:23.735 container="$(< /etc/hostname) ($agent)" 00:01:23.735 else 00:01:23.735 # Fallback 00:01:23.735 container=$agent 00:01:23.735 fi 00:01:23.735 fi 00:01:23.735 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:23.735 00:01:24.006 [Pipeline] } 00:01:24.025 [Pipeline] // withEnv 00:01:24.033 [Pipeline] setCustomBuildProperty 00:01:24.047 [Pipeline] stage 00:01:24.049 [Pipeline] { (Tests) 00:01:24.067 [Pipeline] sh 00:01:24.346 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:24.619 [Pipeline] timeout 00:01:24.619 Timeout set to expire in 40 min 00:01:24.621 [Pipeline] { 00:01:24.636 [Pipeline] sh 00:01:24.914 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:25.481 HEAD is now at aa824ae66 bdevperf: remove max io size limit for verify 00:01:25.494 [Pipeline] sh 00:01:25.774 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:26.055 [Pipeline] sh 00:01:26.330 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:26.603 [Pipeline] sh 00:01:26.883 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:01:26.883 ++ readlink -f spdk_repo 00:01:26.883 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:26.883 + [[ -n /home/vagrant/spdk_repo ]] 00:01:26.883 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:26.883 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:26.883 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:26.883 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:26.883 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:26.883 + cd /home/vagrant/spdk_repo 00:01:26.883 + source /etc/os-release 00:01:26.883 ++ NAME='Fedora Linux' 00:01:26.883 ++ VERSION='38 (Cloud Edition)' 00:01:26.883 ++ ID=fedora 00:01:26.883 ++ VERSION_ID=38 00:01:26.883 ++ VERSION_CODENAME= 00:01:26.883 ++ PLATFORM_ID=platform:f38 00:01:26.883 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:26.883 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:26.883 ++ LOGO=fedora-logo-icon 00:01:26.883 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:26.883 ++ HOME_URL=https://fedoraproject.org/ 00:01:26.883 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:26.884 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:26.884 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:26.884 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:26.884 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:26.884 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:26.884 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:26.884 ++ SUPPORT_END=2024-05-14 00:01:26.884 ++ VARIANT='Cloud Edition' 00:01:26.884 ++ VARIANT_ID=cloud 00:01:26.884 + uname -a 00:01:26.884 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:26.884 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:27.142 Hugepages 00:01:27.142 node hugesize free / total 00:01:27.142 node0 1048576kB 0 / 0 00:01:27.142 node0 2048kB 0 / 0 00:01:27.142 00:01:27.142 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:27.142 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:27.402 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:27.402 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:27.402 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:27.402 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3c3n1 00:01:27.402 + rm -f /tmp/spdk-ld-path 00:01:27.402 + source autorun-spdk.conf 00:01:27.402 ++ RUN_NIGHTLY=1 00:01:27.402 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.402 ++ SPDK_TEST_NVME=1 00:01:27.402 ++ SPDK_TEST_FTL=1 00:01:27.402 ++ SPDK_TEST_ISAL=1 00:01:27.402 ++ SPDK_RUN_ASAN=1 00:01:27.402 ++ SPDK_RUN_UBSAN=1 00:01:27.402 ++ SPDK_TEST_XNVME=1 00:01:27.402 ++ SPDK_TEST_NVME_FDP=1 00:01:27.402 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.402 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:27.402 + [[ -n '' ]] 00:01:27.402 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:27.402 + for M in /var/spdk/build-*-manifest.txt 00:01:27.402 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:27.402 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:27.402 + for M in /var/spdk/build-*-manifest.txt 00:01:27.402 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:27.402 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:27.402 ++ uname 00:01:27.402 + [[ Linux == \L\i\n\u\x ]] 00:01:27.402 + sudo dmesg -T 00:01:27.661 + sudo dmesg --clear 00:01:27.661 + dmesg_pid=5166 00:01:27.661 + [[ Fedora Linux == FreeBSD ]] 00:01:27.661 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.661 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.661 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:27.661 + [[ -x /usr/src/fio-static/fio ]] 00:01:27.661 + sudo dmesg -Tw 00:01:27.661 + export FIO_BIN=/usr/src/fio-static/fio 00:01:27.661 + 
FIO_BIN=/usr/src/fio-static/fio 00:01:27.661 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:27.661 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:27.661 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:27.661 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.661 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.661 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:27.661 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.661 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.661 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:27.661 Test configuration: 00:01:27.661 RUN_NIGHTLY=1 00:01:27.661 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.661 SPDK_TEST_NVME=1 00:01:27.661 SPDK_TEST_FTL=1 00:01:27.661 SPDK_TEST_ISAL=1 00:01:27.661 SPDK_RUN_ASAN=1 00:01:27.661 SPDK_RUN_UBSAN=1 00:01:27.661 SPDK_TEST_XNVME=1 00:01:27.661 SPDK_TEST_NVME_FDP=1 00:01:27.661 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 19:02:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:27.661 19:02:04 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:27.661 19:02:04 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:27.661 19:02:04 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:27.661 19:02:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.661 19:02:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.661 19:02:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.661 19:02:04 -- paths/export.sh@5 -- $ export PATH 00:01:27.661 19:02:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.661 19:02:04 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:27.661 19:02:04 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:27.661 19:02:04 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707937324.XXXXXX 00:01:27.661 19:02:04 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707937324.knr6A4 00:01:27.661 19:02:04 -- 
common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:27.661 19:02:04 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:27.661 19:02:04 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:27.662 19:02:04 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:27.662 19:02:04 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:27.662 19:02:04 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:27.662 19:02:04 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:27.662 19:02:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.662 19:02:04 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:27.662 19:02:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:27.662 19:02:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:27.662 19:02:04 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:27.662 19:02:04 -- spdk/autobuild.sh@16 -- $ date -u 00:01:27.662 Wed Feb 14 07:02:04 PM UTC 2024 00:01:27.662 19:02:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:27.662 v24.05-pre-81-gaa824ae66 00:01:27.662 19:02:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:27.662 19:02:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:27.662 19:02:04 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']' 00:01:27.662 19:02:04 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:01:27.662 19:02:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.662 ************************************ 00:01:27.662 START TEST asan 00:01:27.662 ************************************ 00:01:27.662 using asan 00:01:27.662 19:02:04 -- common/autotest_common.sh@1102 -- $ echo 'using asan' 00:01:27.662 00:01:27.662 real 0m0.000s 00:01:27.662 user 0m0.000s 00:01:27.662 sys 0m0.000s 00:01:27.662 19:02:04 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:27.662 19:02:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.662 ************************************ 00:01:27.662 END TEST asan 00:01:27.662 ************************************ 00:01:27.662 19:02:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:27.662 19:02:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:27.662 19:02:05 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']' 00:01:27.662 19:02:05 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:01:27.662 19:02:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.662 ************************************ 00:01:27.662 START TEST ubsan 00:01:27.662 ************************************ 00:01:27.662 using ubsan 00:01:27.662 19:02:05 -- common/autotest_common.sh@1102 -- $ echo 'using ubsan' 00:01:27.662 00:01:27.662 real 0m0.000s 00:01:27.662 user 0m0.000s 00:01:27.662 sys 0m0.000s 00:01:27.662 19:02:05 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:27.662 19:02:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.662 ************************************ 00:01:27.662 END TEST ubsan 00:01:27.662 ************************************ 00:01:27.662 19:02:05 -- spdk/autobuild.sh@27 -- 
$ '[' -n '' ']' 00:01:27.662 19:02:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:27.662 19:02:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:27.662 19:02:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:27.662 19:02:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:27.662 19:02:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:27.662 19:02:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:27.662 19:02:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:27.662 19:02:05 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:27.921 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:27.921 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:28.488 Using 'verbs' RDMA provider 00:01:43.934 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:56.138 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:56.138 Creating mk/config.mk...done. 00:01:56.138 Creating mk/cc.flags.mk...done. 00:01:56.138 Type 'make' to build. 00:01:56.138 19:02:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:56.138 19:02:32 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']' 00:01:56.138 19:02:32 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:01:56.138 19:02:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.138 ************************************ 00:01:56.138 START TEST make 00:01:56.138 ************************************ 00:01:56.138 19:02:32 -- common/autotest_common.sh@1102 -- $ make -j10 00:01:56.138 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:56.138 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:56.138 meson setup builddir \ 00:01:56.138 -Dwith-libaio=enabled \ 00:01:56.138 -Dwith-liburing=enabled \ 00:01:56.138 -Dwith-libvfn=disabled \ 00:01:56.138 -Dwith-spdk=false && \ 00:01:56.138 meson compile -C builddir && \ 00:01:56.138 cd -) 00:01:56.138 make[1]: Nothing to be done for 'all'. 
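
For reference, the xnvme submodule build kicked off above reduces to the subshell echoed at the start of this make step; a standalone sketch of the same sequence (same flags and in-tree builddir, paths assuming the /home/vagrant/spdk_repo checkout used here) is:

    # Sketch of the xnvme meson build echoed above; not a separate procedure.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=false
    meson compile -C builddir
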
00:01:58.670 The Meson build system 00:01:58.670 Version: 1.3.1 00:01:58.670 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:58.670 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:58.670 Build type: native build 00:01:58.670 Project name: xnvme 00:01:58.670 Project version: 0.7.3 00:01:58.670 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:58.670 C linker for the host machine: cc ld.bfd 2.39-16 00:01:58.670 Host machine cpu family: x86_64 00:01:58.670 Host machine cpu: x86_64 00:01:58.670 Message: host_machine.system: linux 00:01:58.670 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:58.670 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:58.670 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:58.670 Run-time dependency threads found: YES 00:01:58.670 Has header "setupapi.h" : NO 00:01:58.670 Has header "linux/blkzoned.h" : YES 00:01:58.670 Has header "linux/blkzoned.h" : YES (cached) 00:01:58.670 Has header "libaio.h" : YES 00:01:58.670 Library aio found: YES 00:01:58.670 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:58.670 Run-time dependency liburing found: YES 2.2 00:01:58.670 Dependency libvfn skipped: feature with-libvfn disabled 00:01:58.670 Run-time dependency appleframeworks found: NO (tried framework) 00:01:58.670 Run-time dependency appleframeworks found: NO (tried framework) 00:01:58.670 Configuring xnvme_config.h using configuration 00:01:58.670 Configuring xnvme.spec using configuration 00:01:58.670 Run-time dependency bash-completion found: YES 2.11 00:01:58.670 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:58.670 Program cp found: YES (/usr/bin/cp) 00:01:58.670 Has header "winsock2.h" : NO 00:01:58.670 Has header "dbghelp.h" : NO 00:01:58.670 Library rpcrt4 found: NO 00:01:58.670 Library rt found: YES 00:01:58.670 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:58.670 Found CMake: /usr/bin/cmake (3.27.7) 00:01:58.670 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:01:58.670 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:01:58.670 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:01:58.670 Build targets in project: 32 00:01:58.670 00:01:58.670 xnvme 0.7.3 00:01:58.670 00:01:58.670 User defined options 00:01:58.670 with-libaio : enabled 00:01:58.670 with-liburing: enabled 00:01:58.670 with-libvfn : disabled 00:01:58.670 with-spdk : false 00:01:58.670 00:01:58.670 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:58.670 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:58.670 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:01:58.670 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:01:58.670 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:01:58.670 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:01:58.671 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:01:58.671 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:01:58.671 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:01:58.929 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:01:58.929 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:01:58.929 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:01:58.929 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:01:58.929 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:01:58.929 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:01:58.929 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:01:58.929 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:01:58.929 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:01:58.929 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:01:58.929 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:01:58.929 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:01:58.929 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:01:58.929 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:01:58.929 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:01:58.929 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:01:58.929 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:01:59.188 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:01:59.188 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:01:59.188 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:01:59.188 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:01:59.188 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:01:59.188 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:01:59.188 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:01:59.188 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:01:59.188 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:01:59.188 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:01:59.188 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:01:59.188 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:01:59.188 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:01:59.188 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:01:59.188 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:01:59.188 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:01:59.188 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:01:59.188 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:01:59.188 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:01:59.188 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:01:59.188 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:01:59.188 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:01:59.188 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:01:59.188 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:01:59.188 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:01:59.188 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:01:59.188 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:01:59.188 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:01:59.188 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:01:59.447 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:01:59.447 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:01:59.447 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:01:59.447 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:01:59.447 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:01:59.447 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:01:59.447 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:01:59.447 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:01:59.447 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:01:59.447 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:01:59.447 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:01:59.447 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:01:59.447 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:01:59.447 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:01:59.447 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:01:59.447 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:01:59.447 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:01:59.706 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:01:59.706 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:01:59.706 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:01:59.706 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:01:59.706 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:01:59.706 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:01:59.706 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:01:59.706 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:01:59.706 [79/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:01:59.706 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:01:59.706 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:01:59.706 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:01:59.706 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:01:59.706 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:01:59.965 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:01:59.965 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:01:59.965 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:01:59.965 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:01:59.965 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:01:59.965 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:01:59.965 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:01:59.965 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:01:59.965 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:01:59.965 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:01:59.965 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:01:59.965 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:01:59.965 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:01:59.965 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:01:59.965 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:01:59.965 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:01:59.965 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:01:59.965 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:01:59.965 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:01:59.965 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:01:59.965 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:01:59.965 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:01:59.965 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:01:59.965 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:01:59.965 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:00.224 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:00.224 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:00.224 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:00.224 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:00.224 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:00.224 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:00.224 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:00.224 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:00.224 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:00.224 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:00.224 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:00.224 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:00.224 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:00.224 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:00.224 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:00.224 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:00.224 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:00.224 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:00.224 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:00.224 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:00.224 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:00.224 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:00.224 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:00.483 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:00.483 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:00.483 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:00.483 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:00.483 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:00.483 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:00.483 [139/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:00.483 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:00.483 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:00.483 [142/203] Compiling C object 
tests/xnvme_tests_buf.p/buf.c.o 00:02:00.483 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:00.483 [144/203] Linking target lib/libxnvme.so 00:02:00.483 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:00.483 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:00.741 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:00.741 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:00.742 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:00.742 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:00.742 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:00.742 [152/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:00.742 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:00.742 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:00.742 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:00.742 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:00.742 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:00.742 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:00.742 [159/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:00.742 [160/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:01.000 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:01.000 [162/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:01.000 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:01.000 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:01.000 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:01.000 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:01.000 [167/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:01.000 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:01.000 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:01.000 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:01.259 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:01.259 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:01.259 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:01.259 [174/203] Linking static target lib/libxnvme.a 00:02:01.259 [175/203] Linking target tests/xnvme_tests_cli 00:02:01.259 [176/203] Linking target tests/xnvme_tests_async_intf 00:02:01.259 [177/203] Linking target tests/xnvme_tests_buf 00:02:01.259 [178/203] Linking target tests/xnvme_tests_znd_append 00:02:01.259 [179/203] Linking target tests/xnvme_tests_lblk 00:02:01.259 [180/203] Linking target tests/xnvme_tests_enum 00:02:01.259 [181/203] Linking target tests/xnvme_tests_ioworker 00:02:01.259 [182/203] Linking target tests/xnvme_tests_scc 00:02:01.259 [183/203] Linking target tests/xnvme_tests_xnvme_file 00:02:01.259 [184/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:01.259 [185/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:01.259 [186/203] Linking target tests/xnvme_tests_znd_state 00:02:01.259 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:01.259 [188/203] Linking target tests/xnvme_tests_map 00:02:01.259 [189/203] Linking target tests/xnvme_tests_kvs 00:02:01.518 [190/203] Linking target tools/xnvme_file 00:02:01.518 [191/203] 
Linking target tools/lblk 00:02:01.518 [192/203] Linking target tools/xnvme 00:02:01.518 [193/203] Linking target tools/xdd 00:02:01.518 [194/203] Linking target examples/xnvme_io_async 00:02:01.518 [195/203] Linking target examples/xnvme_enum 00:02:01.518 [196/203] Linking target tools/zoned 00:02:01.518 [197/203] Linking target tools/kvs 00:02:01.518 [198/203] Linking target examples/xnvme_hello 00:02:01.518 [199/203] Linking target examples/xnvme_single_async 00:02:01.518 [200/203] Linking target examples/xnvme_dev 00:02:01.518 [201/203] Linking target examples/xnvme_single_sync 00:02:01.518 [202/203] Linking target examples/zoned_io_async 00:02:01.518 [203/203] Linking target examples/zoned_io_sync 00:02:01.518 INFO: autodetecting backend as ninja 00:02:01.518 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:01.518 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:09.633 The Meson build system 00:02:09.633 Version: 1.3.1 00:02:09.633 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:09.633 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:09.633 Build type: native build 00:02:09.633 Program cat found: YES (/usr/bin/cat) 00:02:09.633 Project name: DPDK 00:02:09.633 Project version: 23.11.0 00:02:09.633 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:09.633 C linker for the host machine: cc ld.bfd 2.39-16 00:02:09.633 Host machine cpu family: x86_64 00:02:09.633 Host machine cpu: x86_64 00:02:09.633 Message: ## Building in Developer Mode ## 00:02:09.633 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:09.633 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:09.633 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:09.633 Program python3 found: YES (/usr/bin/python3) 00:02:09.633 Program cat found: YES (/usr/bin/cat) 00:02:09.633 Compiler for C supports arguments -march=native: YES 00:02:09.633 Checking for size of "void *" : 8 00:02:09.633 Checking for size of "void *" : 8 (cached) 00:02:09.633 Library m found: YES 00:02:09.633 Library numa found: YES 00:02:09.633 Has header "numaif.h" : YES 00:02:09.633 Library fdt found: NO 00:02:09.633 Library execinfo found: NO 00:02:09.633 Has header "execinfo.h" : YES 00:02:09.633 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:09.633 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:09.633 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:09.633 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:09.633 Run-time dependency openssl found: YES 3.0.9 00:02:09.633 Run-time dependency libpcap found: YES 1.10.4 00:02:09.633 Has header "pcap.h" with dependency libpcap: YES 00:02:09.633 Compiler for C supports arguments -Wcast-qual: YES 00:02:09.633 Compiler for C supports arguments -Wdeprecated: YES 00:02:09.633 Compiler for C supports arguments -Wformat: YES 00:02:09.633 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:09.633 Compiler for C supports arguments -Wformat-security: NO 00:02:09.633 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:09.633 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:09.633 Compiler for C supports arguments -Wnested-externs: YES 00:02:09.633 Compiler for C supports arguments -Wold-style-definition: YES 00:02:09.633 Compiler for C supports arguments -Wpointer-arith: YES 
00:02:09.633 Compiler for C supports arguments -Wsign-compare: YES 00:02:09.633 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:09.633 Compiler for C supports arguments -Wundef: YES 00:02:09.633 Compiler for C supports arguments -Wwrite-strings: YES 00:02:09.633 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:09.633 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:09.633 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:09.633 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:09.633 Program objdump found: YES (/usr/bin/objdump) 00:02:09.633 Compiler for C supports arguments -mavx512f: YES 00:02:09.633 Checking if "AVX512 checking" compiles: YES 00:02:09.633 Fetching value of define "__SSE4_2__" : 1 00:02:09.633 Fetching value of define "__AES__" : 1 00:02:09.633 Fetching value of define "__AVX__" : 1 00:02:09.633 Fetching value of define "__AVX2__" : 1 00:02:09.633 Fetching value of define "__AVX512BW__" : (undefined) 00:02:09.633 Fetching value of define "__AVX512CD__" : (undefined) 00:02:09.633 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:09.633 Fetching value of define "__AVX512F__" : (undefined) 00:02:09.633 Fetching value of define "__AVX512VL__" : (undefined) 00:02:09.633 Fetching value of define "__PCLMUL__" : 1 00:02:09.633 Fetching value of define "__RDRND__" : 1 00:02:09.633 Fetching value of define "__RDSEED__" : 1 00:02:09.633 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:09.633 Fetching value of define "__znver1__" : (undefined) 00:02:09.633 Fetching value of define "__znver2__" : (undefined) 00:02:09.633 Fetching value of define "__znver3__" : (undefined) 00:02:09.633 Fetching value of define "__znver4__" : (undefined) 00:02:09.633 Library asan found: YES 00:02:09.633 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:09.633 Message: lib/log: Defining dependency "log" 00:02:09.633 Message: lib/kvargs: Defining dependency "kvargs" 00:02:09.633 Message: lib/telemetry: Defining dependency "telemetry" 00:02:09.633 Library rt found: YES 00:02:09.633 Checking for function "getentropy" : NO 00:02:09.633 Message: lib/eal: Defining dependency "eal" 00:02:09.633 Message: lib/ring: Defining dependency "ring" 00:02:09.633 Message: lib/rcu: Defining dependency "rcu" 00:02:09.633 Message: lib/mempool: Defining dependency "mempool" 00:02:09.633 Message: lib/mbuf: Defining dependency "mbuf" 00:02:09.633 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:09.633 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.633 Compiler for C supports arguments -mpclmul: YES 00:02:09.633 Compiler for C supports arguments -maes: YES 00:02:09.633 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.634 Compiler for C supports arguments -mavx512bw: YES 00:02:09.634 Compiler for C supports arguments -mavx512dq: YES 00:02:09.634 Compiler for C supports arguments -mavx512vl: YES 00:02:09.634 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:09.634 Compiler for C supports arguments -mavx2: YES 00:02:09.634 Compiler for C supports arguments -mavx: YES 00:02:09.634 Message: lib/net: Defining dependency "net" 00:02:09.634 Message: lib/meter: Defining dependency "meter" 00:02:09.634 Message: lib/ethdev: Defining dependency "ethdev" 00:02:09.634 Message: lib/pci: Defining dependency "pci" 00:02:09.634 Message: lib/cmdline: Defining dependency "cmdline" 00:02:09.634 Message: lib/hash: Defining dependency "hash" 
00:02:09.634 Message: lib/timer: Defining dependency "timer" 00:02:09.634 Message: lib/compressdev: Defining dependency "compressdev" 00:02:09.634 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:09.634 Message: lib/dmadev: Defining dependency "dmadev" 00:02:09.634 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:09.634 Message: lib/power: Defining dependency "power" 00:02:09.634 Message: lib/reorder: Defining dependency "reorder" 00:02:09.634 Message: lib/security: Defining dependency "security" 00:02:09.634 Has header "linux/userfaultfd.h" : YES 00:02:09.634 Has header "linux/vduse.h" : YES 00:02:09.634 Message: lib/vhost: Defining dependency "vhost" 00:02:09.634 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:09.634 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.634 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.634 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.634 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:09.634 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:09.634 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:09.634 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:09.634 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:09.634 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:09.634 Program doxygen found: YES (/usr/bin/doxygen) 00:02:09.634 Configuring doxy-api-html.conf using configuration 00:02:09.634 Configuring doxy-api-man.conf using configuration 00:02:09.634 Program mandb found: YES (/usr/bin/mandb) 00:02:09.634 Program sphinx-build found: NO 00:02:09.634 Configuring rte_build_config.h using configuration 00:02:09.634 Message: 00:02:09.634 ================= 00:02:09.634 Applications Enabled 00:02:09.634 ================= 00:02:09.634 00:02:09.634 apps: 00:02:09.634 00:02:09.634 00:02:09.634 Message: 00:02:09.634 ================= 00:02:09.634 Libraries Enabled 00:02:09.634 ================= 00:02:09.634 00:02:09.634 libs: 00:02:09.634 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:09.634 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:09.634 cryptodev, dmadev, power, reorder, security, vhost, 00:02:09.634 00:02:09.634 Message: 00:02:09.634 =============== 00:02:09.634 Drivers Enabled 00:02:09.634 =============== 00:02:09.634 00:02:09.634 common: 00:02:09.634 00:02:09.634 bus: 00:02:09.634 pci, vdev, 00:02:09.634 mempool: 00:02:09.634 ring, 00:02:09.634 dma: 00:02:09.634 00:02:09.634 net: 00:02:09.634 00:02:09.634 crypto: 00:02:09.634 00:02:09.634 compress: 00:02:09.634 00:02:09.634 vdpa: 00:02:09.634 00:02:09.634 00:02:09.634 Message: 00:02:09.634 ================= 00:02:09.634 Content Skipped 00:02:09.634 ================= 00:02:09.634 00:02:09.634 apps: 00:02:09.634 dumpcap: explicitly disabled via build config 00:02:09.634 graph: explicitly disabled via build config 00:02:09.634 pdump: explicitly disabled via build config 00:02:09.634 proc-info: explicitly disabled via build config 00:02:09.634 test-acl: explicitly disabled via build config 00:02:09.634 test-bbdev: explicitly disabled via build config 00:02:09.634 test-cmdline: explicitly disabled via build config 00:02:09.634 test-compress-perf: explicitly disabled via build config 00:02:09.634 test-crypto-perf: explicitly disabled via build config 00:02:09.634 test-dma-perf: explicitly 
disabled via build config 00:02:09.634 test-eventdev: explicitly disabled via build config 00:02:09.634 test-fib: explicitly disabled via build config 00:02:09.634 test-flow-perf: explicitly disabled via build config 00:02:09.634 test-gpudev: explicitly disabled via build config 00:02:09.634 test-mldev: explicitly disabled via build config 00:02:09.634 test-pipeline: explicitly disabled via build config 00:02:09.634 test-pmd: explicitly disabled via build config 00:02:09.634 test-regex: explicitly disabled via build config 00:02:09.634 test-sad: explicitly disabled via build config 00:02:09.634 test-security-perf: explicitly disabled via build config 00:02:09.634 00:02:09.634 libs: 00:02:09.634 metrics: explicitly disabled via build config 00:02:09.634 acl: explicitly disabled via build config 00:02:09.634 bbdev: explicitly disabled via build config 00:02:09.634 bitratestats: explicitly disabled via build config 00:02:09.634 bpf: explicitly disabled via build config 00:02:09.634 cfgfile: explicitly disabled via build config 00:02:09.634 distributor: explicitly disabled via build config 00:02:09.634 efd: explicitly disabled via build config 00:02:09.634 eventdev: explicitly disabled via build config 00:02:09.634 dispatcher: explicitly disabled via build config 00:02:09.634 gpudev: explicitly disabled via build config 00:02:09.634 gro: explicitly disabled via build config 00:02:09.634 gso: explicitly disabled via build config 00:02:09.634 ip_frag: explicitly disabled via build config 00:02:09.634 jobstats: explicitly disabled via build config 00:02:09.634 latencystats: explicitly disabled via build config 00:02:09.634 lpm: explicitly disabled via build config 00:02:09.634 member: explicitly disabled via build config 00:02:09.634 pcapng: explicitly disabled via build config 00:02:09.634 rawdev: explicitly disabled via build config 00:02:09.634 regexdev: explicitly disabled via build config 00:02:09.634 mldev: explicitly disabled via build config 00:02:09.634 rib: explicitly disabled via build config 00:02:09.634 sched: explicitly disabled via build config 00:02:09.634 stack: explicitly disabled via build config 00:02:09.634 ipsec: explicitly disabled via build config 00:02:09.634 pdcp: explicitly disabled via build config 00:02:09.634 fib: explicitly disabled via build config 00:02:09.634 port: explicitly disabled via build config 00:02:09.634 pdump: explicitly disabled via build config 00:02:09.634 table: explicitly disabled via build config 00:02:09.634 pipeline: explicitly disabled via build config 00:02:09.634 graph: explicitly disabled via build config 00:02:09.634 node: explicitly disabled via build config 00:02:09.634 00:02:09.634 drivers: 00:02:09.634 common/cpt: not in enabled drivers build config 00:02:09.634 common/dpaax: not in enabled drivers build config 00:02:09.634 common/iavf: not in enabled drivers build config 00:02:09.634 common/idpf: not in enabled drivers build config 00:02:09.634 common/mvep: not in enabled drivers build config 00:02:09.634 common/octeontx: not in enabled drivers build config 00:02:09.634 bus/auxiliary: not in enabled drivers build config 00:02:09.634 bus/cdx: not in enabled drivers build config 00:02:09.634 bus/dpaa: not in enabled drivers build config 00:02:09.634 bus/fslmc: not in enabled drivers build config 00:02:09.634 bus/ifpga: not in enabled drivers build config 00:02:09.634 bus/platform: not in enabled drivers build config 00:02:09.634 bus/vmbus: not in enabled drivers build config 00:02:09.634 common/cnxk: not in enabled drivers build config 
00:02:09.634 common/mlx5: not in enabled drivers build config 00:02:09.634 common/nfp: not in enabled drivers build config 00:02:09.634 common/qat: not in enabled drivers build config 00:02:09.634 common/sfc_efx: not in enabled drivers build config 00:02:09.634 mempool/bucket: not in enabled drivers build config 00:02:09.634 mempool/cnxk: not in enabled drivers build config 00:02:09.634 mempool/dpaa: not in enabled drivers build config 00:02:09.634 mempool/dpaa2: not in enabled drivers build config 00:02:09.634 mempool/octeontx: not in enabled drivers build config 00:02:09.634 mempool/stack: not in enabled drivers build config 00:02:09.634 dma/cnxk: not in enabled drivers build config 00:02:09.634 dma/dpaa: not in enabled drivers build config 00:02:09.634 dma/dpaa2: not in enabled drivers build config 00:02:09.634 dma/hisilicon: not in enabled drivers build config 00:02:09.634 dma/idxd: not in enabled drivers build config 00:02:09.634 dma/ioat: not in enabled drivers build config 00:02:09.634 dma/skeleton: not in enabled drivers build config 00:02:09.634 net/af_packet: not in enabled drivers build config 00:02:09.634 net/af_xdp: not in enabled drivers build config 00:02:09.634 net/ark: not in enabled drivers build config 00:02:09.634 net/atlantic: not in enabled drivers build config 00:02:09.634 net/avp: not in enabled drivers build config 00:02:09.634 net/axgbe: not in enabled drivers build config 00:02:09.634 net/bnx2x: not in enabled drivers build config 00:02:09.634 net/bnxt: not in enabled drivers build config 00:02:09.634 net/bonding: not in enabled drivers build config 00:02:09.634 net/cnxk: not in enabled drivers build config 00:02:09.634 net/cpfl: not in enabled drivers build config 00:02:09.634 net/cxgbe: not in enabled drivers build config 00:02:09.634 net/dpaa: not in enabled drivers build config 00:02:09.634 net/dpaa2: not in enabled drivers build config 00:02:09.634 net/e1000: not in enabled drivers build config 00:02:09.634 net/ena: not in enabled drivers build config 00:02:09.634 net/enetc: not in enabled drivers build config 00:02:09.634 net/enetfec: not in enabled drivers build config 00:02:09.634 net/enic: not in enabled drivers build config 00:02:09.634 net/failsafe: not in enabled drivers build config 00:02:09.634 net/fm10k: not in enabled drivers build config 00:02:09.634 net/gve: not in enabled drivers build config 00:02:09.634 net/hinic: not in enabled drivers build config 00:02:09.634 net/hns3: not in enabled drivers build config 00:02:09.634 net/i40e: not in enabled drivers build config 00:02:09.634 net/iavf: not in enabled drivers build config 00:02:09.634 net/ice: not in enabled drivers build config 00:02:09.634 net/idpf: not in enabled drivers build config 00:02:09.634 net/igc: not in enabled drivers build config 00:02:09.634 net/ionic: not in enabled drivers build config 00:02:09.634 net/ipn3ke: not in enabled drivers build config 00:02:09.634 net/ixgbe: not in enabled drivers build config 00:02:09.634 net/mana: not in enabled drivers build config 00:02:09.634 net/memif: not in enabled drivers build config 00:02:09.634 net/mlx4: not in enabled drivers build config 00:02:09.634 net/mlx5: not in enabled drivers build config 00:02:09.634 net/mvneta: not in enabled drivers build config 00:02:09.634 net/mvpp2: not in enabled drivers build config 00:02:09.634 net/netvsc: not in enabled drivers build config 00:02:09.634 net/nfb: not in enabled drivers build config 00:02:09.634 net/nfp: not in enabled drivers build config 00:02:09.634 net/ngbe: not in enabled drivers 
build config 00:02:09.634 net/null: not in enabled drivers build config 00:02:09.634 net/octeontx: not in enabled drivers build config 00:02:09.634 net/octeon_ep: not in enabled drivers build config 00:02:09.634 net/pcap: not in enabled drivers build config 00:02:09.634 net/pfe: not in enabled drivers build config 00:02:09.634 net/qede: not in enabled drivers build config 00:02:09.634 net/ring: not in enabled drivers build config 00:02:09.634 net/sfc: not in enabled drivers build config 00:02:09.634 net/softnic: not in enabled drivers build config 00:02:09.634 net/tap: not in enabled drivers build config 00:02:09.634 net/thunderx: not in enabled drivers build config 00:02:09.634 net/txgbe: not in enabled drivers build config 00:02:09.634 net/vdev_netvsc: not in enabled drivers build config 00:02:09.634 net/vhost: not in enabled drivers build config 00:02:09.634 net/virtio: not in enabled drivers build config 00:02:09.634 net/vmxnet3: not in enabled drivers build config 00:02:09.634 raw/*: missing internal dependency, "rawdev" 00:02:09.634 crypto/armv8: not in enabled drivers build config 00:02:09.634 crypto/bcmfs: not in enabled drivers build config 00:02:09.634 crypto/caam_jr: not in enabled drivers build config 00:02:09.634 crypto/ccp: not in enabled drivers build config 00:02:09.634 crypto/cnxk: not in enabled drivers build config 00:02:09.634 crypto/dpaa_sec: not in enabled drivers build config 00:02:09.634 crypto/dpaa2_sec: not in enabled drivers build config 00:02:09.634 crypto/ipsec_mb: not in enabled drivers build config 00:02:09.634 crypto/mlx5: not in enabled drivers build config 00:02:09.634 crypto/mvsam: not in enabled drivers build config 00:02:09.634 crypto/nitrox: not in enabled drivers build config 00:02:09.634 crypto/null: not in enabled drivers build config 00:02:09.634 crypto/octeontx: not in enabled drivers build config 00:02:09.634 crypto/openssl: not in enabled drivers build config 00:02:09.634 crypto/scheduler: not in enabled drivers build config 00:02:09.634 crypto/uadk: not in enabled drivers build config 00:02:09.634 crypto/virtio: not in enabled drivers build config 00:02:09.634 compress/isal: not in enabled drivers build config 00:02:09.634 compress/mlx5: not in enabled drivers build config 00:02:09.634 compress/octeontx: not in enabled drivers build config 00:02:09.634 compress/zlib: not in enabled drivers build config 00:02:09.634 regex/*: missing internal dependency, "regexdev" 00:02:09.634 ml/*: missing internal dependency, "mldev" 00:02:09.634 vdpa/ifc: not in enabled drivers build config 00:02:09.634 vdpa/mlx5: not in enabled drivers build config 00:02:09.635 vdpa/nfp: not in enabled drivers build config 00:02:09.635 vdpa/sfc: not in enabled drivers build config 00:02:09.635 event/*: missing internal dependency, "eventdev" 00:02:09.635 baseband/*: missing internal dependency, "bbdev" 00:02:09.635 gpu/*: missing internal dependency, "gpudev" 00:02:09.635 00:02:09.635 00:02:09.635 Build targets in project: 85 00:02:09.635 00:02:09.635 DPDK 23.11.0 00:02:09.635 00:02:09.635 User defined options 00:02:09.635 buildtype : debug 00:02:09.635 default_library : shared 00:02:09.635 libdir : lib 00:02:09.635 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.635 b_sanitize : address 00:02:09.635 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:09.635 c_link_args : 00:02:09.635 cpu_instruction_set: native 00:02:09.635 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:09.635 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:09.635 enable_docs : false 00:02:09.635 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:09.635 enable_kmods : false 00:02:09.635 tests : false 00:02:09.635 00:02:09.635 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.635 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:09.893 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.893 [2/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.893 [3/265] Linking static target lib/librte_log.a 00:02:09.893 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.893 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.893 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.893 [7/265] Linking static target lib/librte_kvargs.a 00:02:09.893 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.893 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:10.151 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:10.410 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.668 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:10.668 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:10.668 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:10.668 [15/265] Linking static target lib/librte_telemetry.a 00:02:10.668 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:10.926 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:10.926 [18/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.926 [19/265] Linking target lib/librte_log.so.24.0 00:02:10.926 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:10.926 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:11.185 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:11.185 [23/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:11.185 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:11.185 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:11.185 [26/265] Linking target lib/librte_kvargs.so.24.0 00:02:11.443 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:11.443 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:11.443 [29/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.701 [30/265] Linking target lib/librte_telemetry.so.24.0 00:02:11.701 [31/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:11.701 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:11.701 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:11.701 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:11.960 [35/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:11.960 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:12.218 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:12.218 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:12.218 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:12.218 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.218 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:12.218 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:12.476 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:12.476 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:12.476 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:12.476 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:12.735 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:12.735 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:12.735 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:12.993 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:13.252 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:13.252 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:13.252 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:13.510 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.510 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:13.510 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:13.510 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:13.510 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:13.510 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:13.769 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.769 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:13.769 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:13.769 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:14.027 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:14.286 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:14.286 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:14.286 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:14.286 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.545 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:14.804 [70/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.804 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.804 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.804 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:14.804 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:14.804 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.804 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:14.804 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:14.804 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.370 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:15.370 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:15.370 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:15.370 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.370 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.370 [84/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:15.370 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:15.628 [86/265] Linking static target lib/librte_eal.a 00:02:15.628 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.628 [88/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.628 [89/265] Linking static target lib/librte_rcu.a 00:02:15.887 [90/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.887 [91/265] Linking static target lib/librte_ring.a 00:02:15.887 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.887 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:16.145 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.145 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:16.145 [96/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.145 [97/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.145 [98/265] Linking static target lib/librte_mempool.a 00:02:16.403 [99/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.403 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:16.403 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:16.661 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:16.661 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:16.661 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:16.920 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:17.178 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.178 [107/265] Linking static target lib/librte_net.a 00:02:17.178 [108/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.178 [109/265] Linking static target lib/librte_mbuf.a 00:02:17.178 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:17.178 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:17.178 [112/265] Linking static target lib/librte_meter.a 00:02:17.437 [113/265] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:17.437 [114/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.695 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:17.695 [116/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.695 [117/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.695 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.263 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.263 [120/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.263 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.263 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.522 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.781 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.781 [125/265] Linking static target lib/librte_pci.a 00:02:18.781 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:18.781 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:18.781 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.781 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.781 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:18.781 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.039 [132/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.039 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.039 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.039 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.039 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.298 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.298 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.298 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.298 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.298 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.298 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.615 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.615 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.615 [145/265] Linking static target lib/librte_cmdline.a 00:02:19.888 [146/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.888 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.888 [148/265] Linking static target lib/librte_timer.a 00:02:20.147 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.147 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.406 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.406 [152/265] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.664 [153/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.664 [154/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.664 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.664 [156/265] Linking static target lib/librte_hash.a 00:02:20.923 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.923 [158/265] Linking static target lib/librte_compressdev.a 00:02:20.923 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.923 [160/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:20.923 [161/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.923 [162/265] Linking static target lib/librte_ethdev.a 00:02:21.182 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.182 [164/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.182 [165/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.182 [166/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.182 [167/265] Linking static target lib/librte_dmadev.a 00:02:21.440 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.440 [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.440 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.698 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.698 [172/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.698 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.956 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.956 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:21.956 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.956 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:21.956 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.214 [179/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.214 [180/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.214 [181/265] Linking static target lib/librte_cryptodev.a 00:02:22.214 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.214 [183/265] Linking static target lib/librte_power.a 00:02:22.781 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.781 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.781 [186/265] Linking static target lib/librte_reorder.a 00:02:22.781 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:22.781 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.781 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.781 [190/265] Linking static target lib/librte_security.a 00:02:23.039 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.039 [192/265] Generating lib/power.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:23.298 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.298 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.298 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:23.557 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.815 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:23.815 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.815 [199/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.815 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.815 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.815 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.380 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.380 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.380 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.380 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.381 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.381 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.638 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:24.638 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:24.638 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.638 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.638 [213/265] Linking static target drivers/librte_bus_vdev.a 00:02:24.638 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.638 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.638 [216/265] Linking static target drivers/librte_bus_pci.a 00:02:24.638 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.638 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.896 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.896 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.896 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.897 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.897 [223/265] Linking static target drivers/librte_mempool_ring.a 00:02:25.155 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.721 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.721 [226/265] Linking target lib/librte_eal.so.24.0 00:02:25.979 [227/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:25.979 [228/265] Linking target lib/librte_pci.so.24.0 00:02:25.979 [229/265] Linking target lib/librte_dmadev.so.24.0 00:02:25.979 [230/265] 
Linking target lib/librte_timer.so.24.0 00:02:25.979 [231/265] Linking target lib/librte_ring.so.24.0 00:02:25.979 [232/265] Linking target lib/librte_meter.so.24.0 00:02:25.979 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:26.237 [234/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:26.237 [235/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:26.237 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:26.237 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:26.237 [238/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:26.237 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:26.237 [240/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:26.237 [241/265] Linking target lib/librte_rcu.so.24.0 00:02:26.237 [242/265] Linking target lib/librte_mempool.so.24.0 00:02:26.496 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:26.496 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:26.496 [245/265] Linking target lib/librte_mbuf.so.24.0 00:02:26.496 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:26.496 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:26.755 [248/265] Linking target lib/librte_net.so.24.0 00:02:26.755 [249/265] Linking target lib/librte_compressdev.so.24.0 00:02:26.755 [250/265] Linking target lib/librte_reorder.so.24.0 00:02:26.755 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:02:26.755 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:26.755 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:27.013 [254/265] Linking target lib/librte_cmdline.so.24.0 00:02:27.013 [255/265] Linking target lib/librte_hash.so.24.0 00:02:27.013 [256/265] Linking target lib/librte_security.so.24.0 00:02:27.013 [257/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:27.272 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.530 [259/265] Linking target lib/librte_ethdev.so.24.0 00:02:27.530 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:27.789 [261/265] Linking target lib/librte_power.so.24.0 00:02:30.321 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.321 [263/265] Linking static target lib/librte_vhost.a 00:02:31.699 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.958 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:31.958 INFO: autodetecting backend as ninja 00:02:31.958 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:32.895 CC lib/ut/ut.o 00:02:32.895 CC lib/log/log.o 00:02:32.895 CC lib/log/log_flags.o 00:02:32.895 CC lib/log/log_deprecated.o 00:02:32.895 CC lib/ut_mock/mock.o 00:02:33.154 LIB libspdk_ut_mock.a 00:02:33.154 LIB libspdk_ut.a 00:02:33.154 SO libspdk_ut_mock.so.6.0 00:02:33.154 LIB libspdk_log.a 00:02:33.154 SO libspdk_ut.so.2.0 00:02:33.154 SO libspdk_log.so.7.0 00:02:33.413 SYMLINK libspdk_ut_mock.so 00:02:33.413 SYMLINK libspdk_ut.so 00:02:33.413 SYMLINK libspdk_log.so 
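The DPDK build above was configured with `b_sanitize : address`, and the SPDK objects that follow are built against that instrumented tree; as a minimal, hedged illustration (hypothetical file, not project code), this is the class of defect an AddressSanitizer-instrumented build reports at run time instead of silently corrupting memory.

/* asan_demo.c -- hypothetical example of a heap-buffer-overflow that an
 * ASan-instrumented build aborts on with a diagnostic report.
 *   cc -g -fsanitize=address asan_demo.c -o asan_demo && ./asan_demo */
#include <stdlib.h>

int main(void)
{
    char *buf = malloc(8);
    if (buf == NULL)
        return 1;
    buf[8] = 'x';   /* write one byte past the 8-byte allocation */
    free(buf);
    return 0;
}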
00:02:33.413 CC lib/util/bit_array.o 00:02:33.413 CC lib/util/base64.o 00:02:33.413 CC lib/ioat/ioat.o 00:02:33.413 CC lib/util/cpuset.o 00:02:33.413 CC lib/util/crc32.o 00:02:33.413 CC lib/util/crc16.o 00:02:33.413 CXX lib/trace_parser/trace.o 00:02:33.413 CC lib/util/crc32c.o 00:02:33.413 CC lib/dma/dma.o 00:02:33.671 CC lib/vfio_user/host/vfio_user_pci.o 00:02:33.671 CC lib/util/crc32_ieee.o 00:02:33.671 CC lib/vfio_user/host/vfio_user.o 00:02:33.671 CC lib/util/crc64.o 00:02:33.671 CC lib/util/dif.o 00:02:33.930 CC lib/util/fd.o 00:02:33.930 LIB libspdk_dma.a 00:02:33.930 SO libspdk_dma.so.4.0 00:02:33.930 CC lib/util/file.o 00:02:33.930 CC lib/util/hexlify.o 00:02:33.930 SYMLINK libspdk_dma.so 00:02:33.930 CC lib/util/iov.o 00:02:33.930 CC lib/util/math.o 00:02:33.930 CC lib/util/pipe.o 00:02:33.930 CC lib/util/strerror_tls.o 00:02:33.930 LIB libspdk_ioat.a 00:02:33.930 LIB libspdk_vfio_user.a 00:02:34.188 SO libspdk_ioat.so.7.0 00:02:34.188 SO libspdk_vfio_user.so.5.0 00:02:34.188 CC lib/util/string.o 00:02:34.188 SYMLINK libspdk_ioat.so 00:02:34.188 CC lib/util/uuid.o 00:02:34.188 CC lib/util/fd_group.o 00:02:34.188 CC lib/util/xor.o 00:02:34.188 SYMLINK libspdk_vfio_user.so 00:02:34.188 CC lib/util/zipf.o 00:02:34.755 LIB libspdk_util.a 00:02:34.755 SO libspdk_util.so.9.0 00:02:34.755 LIB libspdk_trace_parser.a 00:02:35.021 SO libspdk_trace_parser.so.5.0 00:02:35.021 SYMLINK libspdk_util.so 00:02:35.021 SYMLINK libspdk_trace_parser.so 00:02:35.021 CC lib/vmd/vmd.o 00:02:35.021 CC lib/vmd/led.o 00:02:35.021 CC lib/idxd/idxd.o 00:02:35.021 CC lib/conf/conf.o 00:02:35.021 CC lib/idxd/idxd_user.o 00:02:35.021 CC lib/json/json_parse.o 00:02:35.021 CC lib/rdma/common.o 00:02:35.021 CC lib/rdma/rdma_verbs.o 00:02:35.021 CC lib/json/json_util.o 00:02:35.021 CC lib/env_dpdk/env.o 00:02:35.307 CC lib/env_dpdk/memory.o 00:02:35.566 CC lib/json/json_write.o 00:02:35.566 LIB libspdk_conf.a 00:02:35.566 CC lib/env_dpdk/pci.o 00:02:35.566 SO libspdk_conf.so.6.0 00:02:35.566 CC lib/env_dpdk/init.o 00:02:35.566 CC lib/env_dpdk/threads.o 00:02:35.566 LIB libspdk_rdma.a 00:02:35.824 SYMLINK libspdk_conf.so 00:02:35.824 CC lib/env_dpdk/pci_ioat.o 00:02:35.824 SO libspdk_rdma.so.6.0 00:02:35.824 SYMLINK libspdk_rdma.so 00:02:35.824 CC lib/env_dpdk/pci_virtio.o 00:02:35.824 CC lib/env_dpdk/pci_vmd.o 00:02:36.086 CC lib/env_dpdk/pci_idxd.o 00:02:36.086 LIB libspdk_json.a 00:02:36.086 CC lib/env_dpdk/pci_event.o 00:02:36.086 CC lib/env_dpdk/sigbus_handler.o 00:02:36.086 SO libspdk_json.so.6.0 00:02:36.344 CC lib/env_dpdk/pci_dpdk.o 00:02:36.344 SYMLINK libspdk_json.so 00:02:36.344 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:36.344 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:36.344 LIB libspdk_idxd.a 00:02:36.344 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.344 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.344 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.344 SO libspdk_idxd.so.12.0 00:02:36.602 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.602 SYMLINK libspdk_idxd.so 00:02:36.602 LIB libspdk_vmd.a 00:02:36.602 SO libspdk_vmd.so.6.0 00:02:36.861 SYMLINK libspdk_vmd.so 00:02:36.861 LIB libspdk_jsonrpc.a 00:02:36.861 SO libspdk_jsonrpc.so.6.0 00:02:36.861 SYMLINK libspdk_jsonrpc.so 00:02:37.118 CC lib/rpc/rpc.o 00:02:37.375 LIB libspdk_rpc.a 00:02:37.375 SO libspdk_rpc.so.6.0 00:02:37.633 SYMLINK libspdk_rpc.so 00:02:37.633 CC lib/sock/sock.o 00:02:37.633 CC lib/notify/notify.o 00:02:37.633 CC lib/sock/sock_rpc.o 00:02:37.633 CC lib/notify/notify_rpc.o 00:02:37.633 CC lib/trace/trace.o 00:02:37.633 CC lib/trace/trace_flags.o 
00:02:37.633 CC lib/trace/trace_rpc.o 00:02:37.892 LIB libspdk_env_dpdk.a 00:02:37.892 SO libspdk_env_dpdk.so.14.0 00:02:37.892 LIB libspdk_notify.a 00:02:37.892 SO libspdk_notify.so.6.0 00:02:38.150 SYMLINK libspdk_notify.so 00:02:38.150 LIB libspdk_trace.a 00:02:38.150 SYMLINK libspdk_env_dpdk.so 00:02:38.150 SO libspdk_trace.so.10.0 00:02:38.150 SYMLINK libspdk_trace.so 00:02:38.407 LIB libspdk_sock.a 00:02:38.407 SO libspdk_sock.so.9.0 00:02:38.407 CC lib/thread/thread.o 00:02:38.407 CC lib/thread/iobuf.o 00:02:38.407 SYMLINK libspdk_sock.so 00:02:38.665 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:38.665 CC lib/nvme/nvme_ctrlr.o 00:02:38.665 CC lib/nvme/nvme_fabric.o 00:02:38.665 CC lib/nvme/nvme_ns_cmd.o 00:02:38.665 CC lib/nvme/nvme_ns.o 00:02:38.665 CC lib/nvme/nvme_pcie_common.o 00:02:38.665 CC lib/nvme/nvme_pcie.o 00:02:38.666 CC lib/nvme/nvme_qpair.o 00:02:38.666 CC lib/nvme/nvme.o 00:02:39.600 CC lib/nvme/nvme_quirks.o 00:02:39.600 CC lib/nvme/nvme_transport.o 00:02:39.858 CC lib/nvme/nvme_discovery.o 00:02:40.117 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.117 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.117 CC lib/nvme/nvme_tcp.o 00:02:40.375 CC lib/nvme/nvme_opal.o 00:02:40.375 CC lib/nvme/nvme_io_msg.o 00:02:40.632 CC lib/nvme/nvme_poll_group.o 00:02:40.632 CC lib/nvme/nvme_zns.o 00:02:40.890 CC lib/nvme/nvme_cuse.o 00:02:40.890 CC lib/nvme/nvme_vfio_user.o 00:02:41.148 LIB libspdk_thread.a 00:02:41.149 SO libspdk_thread.so.10.0 00:02:41.149 CC lib/nvme/nvme_rdma.o 00:02:41.149 SYMLINK libspdk_thread.so 00:02:41.407 CC lib/accel/accel.o 00:02:41.407 CC lib/blob/blobstore.o 00:02:41.407 CC lib/accel/accel_rpc.o 00:02:41.407 CC lib/accel/accel_sw.o 00:02:41.665 CC lib/blob/request.o 00:02:41.665 CC lib/init/json_config.o 00:02:41.923 CC lib/init/subsystem.o 00:02:41.923 CC lib/init/subsystem_rpc.o 00:02:41.923 CC lib/blob/zeroes.o 00:02:41.923 CC lib/blob/blob_bs_dev.o 00:02:42.181 CC lib/init/rpc.o 00:02:42.181 CC lib/virtio/virtio_vhost_user.o 00:02:42.181 CC lib/virtio/virtio.o 00:02:42.181 CC lib/virtio/virtio_pci.o 00:02:42.181 CC lib/virtio/virtio_vfio_user.o 00:02:42.439 LIB libspdk_init.a 00:02:42.439 SO libspdk_init.so.5.0 00:02:42.439 SYMLINK libspdk_init.so 00:02:42.697 CC lib/event/app.o 00:02:42.698 CC lib/event/reactor.o 00:02:42.698 CC lib/event/log_rpc.o 00:02:42.698 CC lib/event/app_rpc.o 00:02:42.698 CC lib/event/scheduler_static.o 00:02:42.698 LIB libspdk_nvme.a 00:02:42.956 LIB libspdk_virtio.a 00:02:42.956 SO libspdk_virtio.so.7.0 00:02:42.956 SO libspdk_nvme.so.13.0 00:02:43.214 SYMLINK libspdk_virtio.so 00:02:43.472 LIB libspdk_event.a 00:02:43.472 SYMLINK libspdk_nvme.so 00:02:43.472 SO libspdk_event.so.13.0 00:02:43.472 LIB libspdk_accel.a 00:02:43.472 SYMLINK libspdk_event.so 00:02:43.730 SO libspdk_accel.so.15.0 00:02:43.730 SYMLINK libspdk_accel.so 00:02:43.730 CC lib/bdev/bdev.o 00:02:43.730 CC lib/bdev/bdev_rpc.o 00:02:43.730 CC lib/bdev/bdev_zone.o 00:02:43.730 CC lib/bdev/part.o 00:02:43.730 CC lib/bdev/scsi_nvme.o 00:02:46.256 LIB libspdk_blob.a 00:02:46.256 SO libspdk_blob.so.11.0 00:02:46.256 SYMLINK libspdk_blob.so 00:02:46.256 CC lib/blobfs/blobfs.o 00:02:46.256 CC lib/lvol/lvol.o 00:02:46.256 CC lib/blobfs/tree.o 00:02:47.628 LIB libspdk_bdev.a 00:02:47.628 LIB libspdk_lvol.a 00:02:47.628 SO libspdk_bdev.so.15.0 00:02:47.628 SO libspdk_lvol.so.10.0 00:02:47.628 SYMLINK libspdk_lvol.so 00:02:47.628 SYMLINK libspdk_bdev.so 00:02:47.628 LIB libspdk_blobfs.a 00:02:47.628 SO libspdk_blobfs.so.10.0 00:02:47.886 CC lib/ftl/ftl_core.o 00:02:47.886 CC 
lib/ftl/ftl_init.o 00:02:47.886 CC lib/ftl/ftl_layout.o 00:02:47.886 CC lib/ftl/ftl_debug.o 00:02:47.886 CC lib/ftl/ftl_io.o 00:02:47.886 CC lib/nbd/nbd.o 00:02:47.886 CC lib/scsi/dev.o 00:02:47.886 CC lib/nvmf/ctrlr.o 00:02:47.886 CC lib/ublk/ublk.o 00:02:47.886 SYMLINK libspdk_blobfs.so 00:02:47.886 CC lib/nvmf/ctrlr_discovery.o 00:02:48.143 CC lib/nvmf/ctrlr_bdev.o 00:02:48.143 CC lib/nvmf/subsystem.o 00:02:48.143 CC lib/scsi/lun.o 00:02:48.401 CC lib/scsi/port.o 00:02:48.401 CC lib/scsi/scsi.o 00:02:48.659 CC lib/ftl/ftl_sb.o 00:02:48.659 CC lib/nvmf/nvmf.o 00:02:48.659 CC lib/nbd/nbd_rpc.o 00:02:48.659 CC lib/nvmf/nvmf_rpc.o 00:02:48.917 CC lib/scsi/scsi_bdev.o 00:02:48.917 CC lib/nvmf/transport.o 00:02:48.917 CC lib/ftl/ftl_l2p.o 00:02:48.917 CC lib/nvmf/tcp.o 00:02:48.917 LIB libspdk_nbd.a 00:02:49.175 SO libspdk_nbd.so.7.0 00:02:49.175 SYMLINK libspdk_nbd.so 00:02:49.175 CC lib/nvmf/rdma.o 00:02:49.175 CC lib/ublk/ublk_rpc.o 00:02:49.433 CC lib/ftl/ftl_l2p_flat.o 00:02:49.433 LIB libspdk_ublk.a 00:02:49.693 SO libspdk_ublk.so.3.0 00:02:49.693 CC lib/ftl/ftl_nv_cache.o 00:02:49.693 CC lib/ftl/ftl_band.o 00:02:49.693 SYMLINK libspdk_ublk.so 00:02:49.693 CC lib/ftl/ftl_band_ops.o 00:02:49.951 CC lib/scsi/scsi_pr.o 00:02:50.209 CC lib/ftl/ftl_writer.o 00:02:50.467 CC lib/ftl/ftl_rq.o 00:02:50.467 CC lib/ftl/ftl_reloc.o 00:02:50.467 CC lib/scsi/scsi_rpc.o 00:02:50.467 CC lib/ftl/ftl_l2p_cache.o 00:02:50.726 CC lib/ftl/ftl_p2l.o 00:02:50.726 CC lib/ftl/ftl_trace.o 00:02:50.726 CC lib/ftl/mngt/ftl_mngt.o 00:02:50.726 CC lib/scsi/task.o 00:02:50.984 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:51.242 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:51.242 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:51.242 LIB libspdk_scsi.a 00:02:51.242 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:51.242 SO libspdk_scsi.so.9.0 00:02:51.500 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:51.500 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:51.500 SYMLINK libspdk_scsi.so 00:02:51.500 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:51.500 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:51.759 CC lib/iscsi/conn.o 00:02:51.759 CC lib/iscsi/init_grp.o 00:02:51.759 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:51.759 CC lib/iscsi/iscsi.o 00:02:52.017 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:52.017 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:52.017 CC lib/vhost/vhost.o 00:02:52.017 CC lib/iscsi/md5.o 00:02:52.275 CC lib/iscsi/param.o 00:02:52.275 CC lib/vhost/vhost_rpc.o 00:02:52.275 CC lib/vhost/vhost_scsi.o 00:02:52.534 CC lib/iscsi/portal_grp.o 00:02:52.792 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:52.792 CC lib/vhost/vhost_blk.o 00:02:52.792 CC lib/iscsi/tgt_node.o 00:02:52.792 CC lib/ftl/utils/ftl_conf.o 00:02:53.050 CC lib/iscsi/iscsi_subsystem.o 00:02:53.050 CC lib/iscsi/iscsi_rpc.o 00:02:53.050 CC lib/iscsi/task.o 00:02:53.308 CC lib/ftl/utils/ftl_md.o 00:02:53.308 CC lib/ftl/utils/ftl_mempool.o 00:02:53.566 CC lib/vhost/rte_vhost_user.o 00:02:53.566 CC lib/ftl/utils/ftl_bitmap.o 00:02:53.824 CC lib/ftl/utils/ftl_property.o 00:02:53.824 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:53.824 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:54.082 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:54.082 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:54.083 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:54.083 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:54.083 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:54.341 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:54.341 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:54.341 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:54.341 CC lib/ftl/base/ftl_base_dev.o 00:02:54.341 CC 
lib/ftl/base/ftl_base_bdev.o 00:02:54.341 LIB libspdk_nvmf.a 00:02:54.599 SO libspdk_nvmf.so.18.0 00:02:54.858 LIB libspdk_ftl.a 00:02:54.858 SYMLINK libspdk_nvmf.so 00:02:55.116 SO libspdk_ftl.so.9.0 00:02:55.116 LIB libspdk_iscsi.a 00:02:55.116 SO libspdk_iscsi.so.8.0 00:02:55.374 LIB libspdk_vhost.a 00:02:55.374 SO libspdk_vhost.so.8.0 00:02:55.374 SYMLINK libspdk_ftl.so 00:02:55.374 SYMLINK libspdk_iscsi.so 00:02:55.632 SYMLINK libspdk_vhost.so 00:02:55.632 CC module/env_dpdk/env_dpdk_rpc.o 00:02:55.890 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:55.890 CC module/scheduler/gscheduler/gscheduler.o 00:02:55.890 CC module/accel/error/accel_error.o 00:02:55.890 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:55.890 CC module/accel/dsa/accel_dsa.o 00:02:55.890 CC module/blob/bdev/blob_bdev.o 00:02:55.890 CC module/sock/posix/posix.o 00:02:55.890 CC module/accel/iaa/accel_iaa.o 00:02:55.890 CC module/accel/ioat/accel_ioat.o 00:02:55.890 LIB libspdk_env_dpdk_rpc.a 00:02:56.149 SO libspdk_env_dpdk_rpc.so.6.0 00:02:56.149 CC module/accel/error/accel_error_rpc.o 00:02:56.149 CC module/accel/iaa/accel_iaa_rpc.o 00:02:56.149 LIB libspdk_scheduler_gscheduler.a 00:02:56.149 LIB libspdk_scheduler_dpdk_governor.a 00:02:56.149 SYMLINK libspdk_env_dpdk_rpc.so 00:02:56.149 CC module/accel/dsa/accel_dsa_rpc.o 00:02:56.149 SO libspdk_scheduler_gscheduler.so.4.0 00:02:56.149 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:56.149 LIB libspdk_scheduler_dynamic.a 00:02:56.149 LIB libspdk_accel_error.a 00:02:56.149 CC module/accel/ioat/accel_ioat_rpc.o 00:02:56.149 SYMLINK libspdk_scheduler_gscheduler.so 00:02:56.149 SO libspdk_scheduler_dynamic.so.4.0 00:02:56.149 SO libspdk_accel_error.so.2.0 00:02:56.149 LIB libspdk_accel_iaa.a 00:02:56.149 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:56.407 SO libspdk_accel_iaa.so.3.0 00:02:56.407 SYMLINK libspdk_accel_error.so 00:02:56.407 SYMLINK libspdk_scheduler_dynamic.so 00:02:56.407 LIB libspdk_blob_bdev.a 00:02:56.407 SO libspdk_blob_bdev.so.11.0 00:02:56.407 LIB libspdk_accel_ioat.a 00:02:56.407 LIB libspdk_accel_dsa.a 00:02:56.407 SYMLINK libspdk_accel_iaa.so 00:02:56.407 SO libspdk_accel_ioat.so.6.0 00:02:56.407 SO libspdk_accel_dsa.so.5.0 00:02:56.407 SYMLINK libspdk_blob_bdev.so 00:02:56.407 SYMLINK libspdk_accel_ioat.so 00:02:56.665 SYMLINK libspdk_accel_dsa.so 00:02:56.665 CC module/bdev/nvme/bdev_nvme.o 00:02:56.665 CC module/blobfs/bdev/blobfs_bdev.o 00:02:56.665 CC module/bdev/malloc/bdev_malloc.o 00:02:56.665 CC module/bdev/delay/vbdev_delay.o 00:02:56.665 CC module/bdev/gpt/gpt.o 00:02:56.665 CC module/bdev/passthru/vbdev_passthru.o 00:02:56.665 CC module/bdev/null/bdev_null.o 00:02:56.665 CC module/bdev/error/vbdev_error.o 00:02:56.665 CC module/bdev/lvol/vbdev_lvol.o 00:02:56.923 LIB libspdk_sock_posix.a 00:02:57.182 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:57.182 SO libspdk_sock_posix.so.6.0 00:02:57.182 CC module/bdev/gpt/vbdev_gpt.o 00:02:57.182 SYMLINK libspdk_sock_posix.so 00:02:57.182 CC module/bdev/null/bdev_null_rpc.o 00:02:57.182 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:57.440 CC module/bdev/error/vbdev_error_rpc.o 00:02:57.440 LIB libspdk_blobfs_bdev.a 00:02:57.440 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:57.440 LIB libspdk_bdev_null.a 00:02:57.440 SO libspdk_blobfs_bdev.so.6.0 00:02:57.440 SO libspdk_bdev_null.so.6.0 00:02:57.440 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:57.440 LIB libspdk_bdev_error.a 00:02:57.440 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:57.440 SYMLINK libspdk_blobfs_bdev.so 
00:02:57.440 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:57.440 SO libspdk_bdev_error.so.6.0 00:02:57.440 SYMLINK libspdk_bdev_null.so 00:02:57.715 CC module/bdev/nvme/nvme_rpc.o 00:02:57.715 LIB libspdk_bdev_delay.a 00:02:57.715 SYMLINK libspdk_bdev_error.so 00:02:57.715 SO libspdk_bdev_delay.so.6.0 00:02:57.715 LIB libspdk_bdev_passthru.a 00:02:57.715 LIB libspdk_bdev_gpt.a 00:02:57.715 SO libspdk_bdev_passthru.so.6.0 00:02:57.715 SO libspdk_bdev_gpt.so.6.0 00:02:57.715 SYMLINK libspdk_bdev_delay.so 00:02:57.715 CC module/bdev/raid/bdev_raid.o 00:02:57.715 CC module/bdev/raid/bdev_raid_rpc.o 00:02:57.715 CC module/bdev/raid/bdev_raid_sb.o 00:02:57.715 LIB libspdk_bdev_malloc.a 00:02:57.980 SYMLINK libspdk_bdev_passthru.so 00:02:57.980 SYMLINK libspdk_bdev_gpt.so 00:02:57.980 CC module/bdev/raid/raid0.o 00:02:57.980 SO libspdk_bdev_malloc.so.6.0 00:02:57.980 SYMLINK libspdk_bdev_malloc.so 00:02:57.980 CC module/bdev/split/vbdev_split.o 00:02:57.980 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.238 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.238 LIB libspdk_bdev_lvol.a 00:02:58.238 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.238 SO libspdk_bdev_lvol.so.6.0 00:02:58.238 CC module/bdev/raid/raid1.o 00:02:58.238 CC module/bdev/raid/concat.o 00:02:58.497 SYMLINK libspdk_bdev_lvol.so 00:02:58.497 CC module/bdev/nvme/bdev_mdns_client.o 00:02:58.497 LIB libspdk_bdev_split.a 00:02:58.497 SO libspdk_bdev_split.so.6.0 00:02:58.497 CC module/bdev/nvme/vbdev_opal.o 00:02:58.497 CC module/bdev/xnvme/bdev_xnvme.o 00:02:58.755 SYMLINK libspdk_bdev_split.so 00:02:58.755 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.755 CC module/bdev/aio/bdev_aio.o 00:02:59.014 CC module/bdev/aio/bdev_aio_rpc.o 00:02:59.014 CC module/bdev/ftl/bdev_ftl.o 00:02:59.014 LIB libspdk_bdev_zone_block.a 00:02:59.014 SO libspdk_bdev_zone_block.so.6.0 00:02:59.014 CC module/bdev/iscsi/bdev_iscsi.o 00:02:59.014 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:59.014 SYMLINK libspdk_bdev_zone_block.so 00:02:59.014 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:59.014 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:59.014 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:59.272 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:59.272 LIB libspdk_bdev_xnvme.a 00:02:59.272 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:59.530 SO libspdk_bdev_xnvme.so.3.0 00:02:59.530 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:59.530 LIB libspdk_bdev_ftl.a 00:02:59.530 SYMLINK libspdk_bdev_xnvme.so 00:02:59.530 SO libspdk_bdev_ftl.so.6.0 00:02:59.530 LIB libspdk_bdev_aio.a 00:02:59.530 SO libspdk_bdev_aio.so.6.0 00:02:59.789 SYMLINK libspdk_bdev_ftl.so 00:02:59.789 LIB libspdk_bdev_raid.a 00:02:59.789 LIB libspdk_bdev_iscsi.a 00:02:59.789 SYMLINK libspdk_bdev_aio.so 00:02:59.789 SO libspdk_bdev_raid.so.6.0 00:02:59.789 SO libspdk_bdev_iscsi.so.6.0 00:02:59.789 SYMLINK libspdk_bdev_iscsi.so 00:02:59.789 SYMLINK libspdk_bdev_raid.so 00:03:00.355 LIB libspdk_bdev_virtio.a 00:03:00.355 SO libspdk_bdev_virtio.so.6.0 00:03:00.355 SYMLINK libspdk_bdev_virtio.so 00:03:01.289 LIB libspdk_bdev_nvme.a 00:03:01.289 SO libspdk_bdev_nvme.so.7.0 00:03:01.289 SYMLINK libspdk_bdev_nvme.so 00:03:01.856 CC module/event/subsystems/sock/sock.o 00:03:01.856 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:01.856 CC module/event/subsystems/scheduler/scheduler.o 00:03:01.856 CC module/event/subsystems/vmd/vmd.o 00:03:01.856 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:01.856 CC module/event/subsystems/iobuf/iobuf.o 00:03:01.856 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:03:01.856 LIB libspdk_event_sock.a 00:03:01.856 LIB libspdk_event_vhost_blk.a 00:03:01.856 LIB libspdk_event_scheduler.a 00:03:01.856 LIB libspdk_event_vmd.a 00:03:01.856 SO libspdk_event_sock.so.5.0 00:03:01.856 SO libspdk_event_vhost_blk.so.3.0 00:03:02.114 LIB libspdk_event_iobuf.a 00:03:02.114 SO libspdk_event_scheduler.so.4.0 00:03:02.114 SO libspdk_event_vmd.so.6.0 00:03:02.114 SO libspdk_event_iobuf.so.3.0 00:03:02.114 SYMLINK libspdk_event_sock.so 00:03:02.114 SYMLINK libspdk_event_scheduler.so 00:03:02.114 SYMLINK libspdk_event_vhost_blk.so 00:03:02.114 SYMLINK libspdk_event_vmd.so 00:03:02.114 SYMLINK libspdk_event_iobuf.so 00:03:02.373 CC module/event/subsystems/accel/accel.o 00:03:02.631 LIB libspdk_event_accel.a 00:03:02.631 SO libspdk_event_accel.so.6.0 00:03:02.631 SYMLINK libspdk_event_accel.so 00:03:02.889 CC module/event/subsystems/bdev/bdev.o 00:03:02.889 LIB libspdk_event_bdev.a 00:03:03.145 SO libspdk_event_bdev.so.6.0 00:03:03.145 SYMLINK libspdk_event_bdev.so 00:03:03.145 CC module/event/subsystems/ublk/ublk.o 00:03:03.145 CC module/event/subsystems/scsi/scsi.o 00:03:03.145 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:03.145 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:03.145 CC module/event/subsystems/nbd/nbd.o 00:03:03.402 LIB libspdk_event_ublk.a 00:03:03.402 LIB libspdk_event_nbd.a 00:03:03.402 LIB libspdk_event_scsi.a 00:03:03.402 SO libspdk_event_ublk.so.3.0 00:03:03.660 SO libspdk_event_nbd.so.6.0 00:03:03.660 SO libspdk_event_scsi.so.6.0 00:03:03.660 SYMLINK libspdk_event_nbd.so 00:03:03.660 SYMLINK libspdk_event_ublk.so 00:03:03.660 SYMLINK libspdk_event_scsi.so 00:03:03.660 LIB libspdk_event_nvmf.a 00:03:03.660 SO libspdk_event_nvmf.so.6.0 00:03:03.918 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:03.918 CC module/event/subsystems/iscsi/iscsi.o 00:03:03.918 SYMLINK libspdk_event_nvmf.so 00:03:03.918 LIB libspdk_event_vhost_scsi.a 00:03:03.918 SO libspdk_event_vhost_scsi.so.3.0 00:03:03.918 LIB libspdk_event_iscsi.a 00:03:04.176 SYMLINK libspdk_event_vhost_scsi.so 00:03:04.176 SO libspdk_event_iscsi.so.6.0 00:03:04.176 SYMLINK libspdk_event_iscsi.so 00:03:04.176 SO libspdk.so.6.0 00:03:04.176 SYMLINK libspdk.so 00:03:04.435 CXX app/trace/trace.o 00:03:04.435 CC app/trace_record/trace_record.o 00:03:04.435 CC app/nvmf_tgt/nvmf_main.o 00:03:04.435 CC examples/ioat/perf/perf.o 00:03:04.435 CC app/iscsi_tgt/iscsi_tgt.o 00:03:04.435 CC examples/accel/perf/accel_perf.o 00:03:04.693 CC test/accel/dif/dif.o 00:03:04.693 CC test/app/bdev_svc/bdev_svc.o 00:03:04.693 CC examples/blob/hello_world/hello_blob.o 00:03:04.693 CC examples/bdev/hello_world/hello_bdev.o 00:03:04.693 LINK nvmf_tgt 00:03:04.951 LINK hello_blob 00:03:04.951 LINK iscsi_tgt 00:03:04.951 LINK bdev_svc 00:03:04.951 LINK spdk_trace_record 00:03:04.951 LINK ioat_perf 00:03:05.210 LINK hello_bdev 00:03:05.210 LINK spdk_trace 00:03:05.468 CC examples/ioat/verify/verify.o 00:03:05.468 CC examples/bdev/bdevperf/bdevperf.o 00:03:05.468 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:05.468 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:05.468 CC examples/blob/cli/blobcli.o 00:03:05.468 CC app/spdk_tgt/spdk_tgt.o 00:03:05.468 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:05.468 LINK dif 00:03:05.727 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:05.727 LINK accel_perf 00:03:05.727 CC test/app/histogram_perf/histogram_perf.o 00:03:05.727 LINK verify 00:03:05.985 LINK spdk_tgt 00:03:05.985 CC test/app/jsoncat/jsoncat.o 00:03:05.985 LINK 
histogram_perf 00:03:05.985 CC test/app/stub/stub.o 00:03:06.244 CC test/bdev/bdevio/bdevio.o 00:03:06.244 LINK jsoncat 00:03:06.244 CC app/spdk_lspci/spdk_lspci.o 00:03:06.244 LINK nvme_fuzz 00:03:06.503 LINK stub 00:03:06.503 CC test/blobfs/mkfs/mkfs.o 00:03:06.503 LINK bdevperf 00:03:06.503 LINK vhost_fuzz 00:03:06.503 LINK blobcli 00:03:06.503 LINK spdk_lspci 00:03:06.762 CC app/spdk_nvme_perf/perf.o 00:03:06.762 LINK bdevio 00:03:06.762 LINK mkfs 00:03:06.762 CC examples/nvme/hello_world/hello_world.o 00:03:07.019 CC examples/sock/hello_world/hello_sock.o 00:03:07.019 CC examples/vmd/lsvmd/lsvmd.o 00:03:07.019 TEST_HEADER include/spdk/accel.h 00:03:07.019 TEST_HEADER include/spdk/accel_module.h 00:03:07.019 TEST_HEADER include/spdk/assert.h 00:03:07.019 TEST_HEADER include/spdk/barrier.h 00:03:07.019 TEST_HEADER include/spdk/base64.h 00:03:07.019 TEST_HEADER include/spdk/bdev.h 00:03:07.019 TEST_HEADER include/spdk/bdev_module.h 00:03:07.019 TEST_HEADER include/spdk/bdev_zone.h 00:03:07.019 TEST_HEADER include/spdk/bit_array.h 00:03:07.019 TEST_HEADER include/spdk/bit_pool.h 00:03:07.019 CC examples/nvmf/nvmf/nvmf.o 00:03:07.019 TEST_HEADER include/spdk/blob_bdev.h 00:03:07.019 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:07.019 TEST_HEADER include/spdk/blobfs.h 00:03:07.019 TEST_HEADER include/spdk/blob.h 00:03:07.019 TEST_HEADER include/spdk/conf.h 00:03:07.019 TEST_HEADER include/spdk/config.h 00:03:07.019 TEST_HEADER include/spdk/cpuset.h 00:03:07.019 TEST_HEADER include/spdk/crc16.h 00:03:07.019 TEST_HEADER include/spdk/crc32.h 00:03:07.019 CC examples/util/zipf/zipf.o 00:03:07.019 TEST_HEADER include/spdk/crc64.h 00:03:07.019 TEST_HEADER include/spdk/dif.h 00:03:07.019 TEST_HEADER include/spdk/dma.h 00:03:07.019 TEST_HEADER include/spdk/endian.h 00:03:07.019 TEST_HEADER include/spdk/env_dpdk.h 00:03:07.019 TEST_HEADER include/spdk/env.h 00:03:07.019 TEST_HEADER include/spdk/event.h 00:03:07.019 TEST_HEADER include/spdk/fd_group.h 00:03:07.019 TEST_HEADER include/spdk/fd.h 00:03:07.019 TEST_HEADER include/spdk/file.h 00:03:07.019 TEST_HEADER include/spdk/ftl.h 00:03:07.019 TEST_HEADER include/spdk/gpt_spec.h 00:03:07.019 TEST_HEADER include/spdk/hexlify.h 00:03:07.019 TEST_HEADER include/spdk/histogram_data.h 00:03:07.019 TEST_HEADER include/spdk/idxd.h 00:03:07.019 TEST_HEADER include/spdk/idxd_spec.h 00:03:07.019 TEST_HEADER include/spdk/init.h 00:03:07.276 TEST_HEADER include/spdk/ioat.h 00:03:07.276 TEST_HEADER include/spdk/ioat_spec.h 00:03:07.276 TEST_HEADER include/spdk/iscsi_spec.h 00:03:07.276 TEST_HEADER include/spdk/json.h 00:03:07.276 TEST_HEADER include/spdk/jsonrpc.h 00:03:07.276 TEST_HEADER include/spdk/likely.h 00:03:07.276 TEST_HEADER include/spdk/log.h 00:03:07.276 TEST_HEADER include/spdk/lvol.h 00:03:07.276 TEST_HEADER include/spdk/memory.h 00:03:07.276 TEST_HEADER include/spdk/mmio.h 00:03:07.276 TEST_HEADER include/spdk/nbd.h 00:03:07.276 TEST_HEADER include/spdk/notify.h 00:03:07.276 TEST_HEADER include/spdk/nvme.h 00:03:07.276 TEST_HEADER include/spdk/nvme_intel.h 00:03:07.276 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:07.276 CC test/dma/test_dma/test_dma.o 00:03:07.276 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:07.276 TEST_HEADER include/spdk/nvme_spec.h 00:03:07.276 TEST_HEADER include/spdk/nvme_zns.h 00:03:07.276 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:07.276 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:07.276 TEST_HEADER include/spdk/nvmf.h 00:03:07.276 TEST_HEADER include/spdk/nvmf_spec.h 00:03:07.276 TEST_HEADER 
include/spdk/nvmf_transport.h 00:03:07.276 TEST_HEADER include/spdk/opal.h 00:03:07.276 TEST_HEADER include/spdk/opal_spec.h 00:03:07.276 TEST_HEADER include/spdk/pci_ids.h 00:03:07.276 TEST_HEADER include/spdk/pipe.h 00:03:07.276 TEST_HEADER include/spdk/queue.h 00:03:07.276 TEST_HEADER include/spdk/reduce.h 00:03:07.276 LINK lsvmd 00:03:07.276 TEST_HEADER include/spdk/rpc.h 00:03:07.276 TEST_HEADER include/spdk/scheduler.h 00:03:07.276 TEST_HEADER include/spdk/scsi.h 00:03:07.276 TEST_HEADER include/spdk/scsi_spec.h 00:03:07.276 TEST_HEADER include/spdk/sock.h 00:03:07.276 TEST_HEADER include/spdk/stdinc.h 00:03:07.276 TEST_HEADER include/spdk/string.h 00:03:07.276 TEST_HEADER include/spdk/thread.h 00:03:07.276 TEST_HEADER include/spdk/trace.h 00:03:07.276 TEST_HEADER include/spdk/trace_parser.h 00:03:07.276 CC app/spdk_nvme_identify/identify.o 00:03:07.276 TEST_HEADER include/spdk/tree.h 00:03:07.276 TEST_HEADER include/spdk/ublk.h 00:03:07.276 TEST_HEADER include/spdk/util.h 00:03:07.276 TEST_HEADER include/spdk/uuid.h 00:03:07.276 LINK hello_sock 00:03:07.276 TEST_HEADER include/spdk/version.h 00:03:07.276 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:07.276 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:07.276 TEST_HEADER include/spdk/vhost.h 00:03:07.276 TEST_HEADER include/spdk/vmd.h 00:03:07.276 TEST_HEADER include/spdk/xor.h 00:03:07.276 TEST_HEADER include/spdk/zipf.h 00:03:07.276 LINK hello_world 00:03:07.276 CXX test/cpp_headers/accel.o 00:03:07.276 LINK zipf 00:03:07.533 LINK nvmf 00:03:07.533 CC app/spdk_nvme_discover/discovery_aer.o 00:03:07.533 CXX test/cpp_headers/accel_module.o 00:03:07.533 CC examples/vmd/led/led.o 00:03:07.533 CC examples/nvme/reconnect/reconnect.o 00:03:07.790 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:07.790 CXX test/cpp_headers/assert.o 00:03:07.790 LINK spdk_nvme_discover 00:03:07.790 LINK led 00:03:07.790 LINK test_dma 00:03:07.790 LINK spdk_nvme_perf 00:03:08.047 CXX test/cpp_headers/barrier.o 00:03:08.047 CC examples/thread/thread/thread_ex.o 00:03:08.047 CXX test/cpp_headers/base64.o 00:03:08.047 CXX test/cpp_headers/bdev.o 00:03:08.047 CXX test/cpp_headers/bdev_module.o 00:03:08.047 CC app/spdk_top/spdk_top.o 00:03:08.305 LINK reconnect 00:03:08.305 LINK thread 00:03:08.305 CXX test/cpp_headers/bdev_zone.o 00:03:08.305 CC app/vhost/vhost.o 00:03:08.305 CC app/spdk_dd/spdk_dd.o 00:03:08.305 CXX test/cpp_headers/bit_array.o 00:03:08.305 CXX test/cpp_headers/bit_pool.o 00:03:08.562 LINK nvme_manage 00:03:08.562 CXX test/cpp_headers/blob_bdev.o 00:03:08.562 CXX test/cpp_headers/blobfs_bdev.o 00:03:08.562 LINK vhost 00:03:08.562 CXX test/cpp_headers/blobfs.o 00:03:08.562 CC app/fio/nvme/fio_plugin.o 00:03:08.562 CXX test/cpp_headers/blob.o 00:03:08.562 CC examples/nvme/arbitration/arbitration.o 00:03:08.819 LINK spdk_nvme_identify 00:03:08.819 LINK spdk_dd 00:03:08.819 CXX test/cpp_headers/conf.o 00:03:08.819 LINK iscsi_fuzz 00:03:08.819 CC examples/idxd/perf/perf.o 00:03:09.076 CXX test/cpp_headers/config.o 00:03:09.076 CC test/event/event_perf/event_perf.o 00:03:09.076 CC test/env/mem_callbacks/mem_callbacks.o 00:03:09.076 CXX test/cpp_headers/cpuset.o 00:03:09.076 CXX test/cpp_headers/crc16.o 00:03:09.076 LINK arbitration 00:03:09.076 CC app/fio/bdev/fio_plugin.o 00:03:09.076 LINK event_perf 00:03:09.334 CXX test/cpp_headers/crc32.o 00:03:09.334 CC examples/nvme/hotplug/hotplug.o 00:03:09.334 CC test/nvme/aer/aer.o 00:03:09.334 LINK idxd_perf 00:03:09.334 LINK spdk_nvme 00:03:09.334 LINK spdk_top 00:03:09.334 CC test/lvol/esnap/esnap.o 
00:03:09.334 CC test/event/reactor/reactor.o 00:03:09.334 CXX test/cpp_headers/crc64.o 00:03:09.591 CC test/event/reactor_perf/reactor_perf.o 00:03:09.591 LINK reactor 00:03:09.591 CC test/env/vtophys/vtophys.o 00:03:09.591 LINK hotplug 00:03:09.591 CXX test/cpp_headers/dif.o 00:03:09.591 CC test/nvme/reset/reset.o 00:03:09.591 LINK mem_callbacks 00:03:09.591 LINK reactor_perf 00:03:09.592 LINK aer 00:03:09.849 LINK spdk_bdev 00:03:09.849 LINK vtophys 00:03:09.849 CC test/nvme/sgl/sgl.o 00:03:09.849 CXX test/cpp_headers/dma.o 00:03:09.849 CXX test/cpp_headers/endian.o 00:03:09.849 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:09.849 CC test/event/app_repeat/app_repeat.o 00:03:09.849 CC test/rpc_client/rpc_client_test.o 00:03:09.849 LINK reset 00:03:09.849 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:09.849 CXX test/cpp_headers/env_dpdk.o 00:03:10.106 CC test/env/memory/memory_ut.o 00:03:10.106 LINK cmb_copy 00:03:10.106 CC test/thread/poller_perf/poller_perf.o 00:03:10.106 LINK app_repeat 00:03:10.106 LINK sgl 00:03:10.106 LINK env_dpdk_post_init 00:03:10.106 CXX test/cpp_headers/env.o 00:03:10.106 LINK rpc_client_test 00:03:10.364 CC test/event/scheduler/scheduler.o 00:03:10.364 LINK poller_perf 00:03:10.364 CC examples/nvme/abort/abort.o 00:03:10.364 CXX test/cpp_headers/event.o 00:03:10.364 CC test/nvme/e2edp/nvme_dp.o 00:03:10.364 CC test/nvme/overhead/overhead.o 00:03:10.364 CC test/nvme/err_injection/err_injection.o 00:03:10.364 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:10.364 CXX test/cpp_headers/fd_group.o 00:03:10.364 LINK scheduler 00:03:10.621 LINK pmr_persistence 00:03:10.621 CC test/env/pci/pci_ut.o 00:03:10.621 LINK err_injection 00:03:10.621 CXX test/cpp_headers/fd.o 00:03:10.621 LINK nvme_dp 00:03:10.621 CXX test/cpp_headers/file.o 00:03:10.621 LINK overhead 00:03:10.879 CXX test/cpp_headers/ftl.o 00:03:10.880 LINK abort 00:03:10.880 CC test/nvme/startup/startup.o 00:03:10.880 CC test/nvme/reserve/reserve.o 00:03:10.880 CXX test/cpp_headers/gpt_spec.o 00:03:10.880 CC test/nvme/simple_copy/simple_copy.o 00:03:10.880 CXX test/cpp_headers/hexlify.o 00:03:10.880 CC test/nvme/connect_stress/connect_stress.o 00:03:11.138 LINK startup 00:03:11.138 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:11.138 CXX test/cpp_headers/histogram_data.o 00:03:11.138 LINK pci_ut 00:03:11.138 LINK reserve 00:03:11.138 LINK memory_ut 00:03:11.138 CXX test/cpp_headers/idxd.o 00:03:11.138 LINK connect_stress 00:03:11.138 LINK simple_copy 00:03:11.138 CXX test/cpp_headers/idxd_spec.o 00:03:11.138 LINK interrupt_tgt 00:03:11.396 CC test/nvme/boot_partition/boot_partition.o 00:03:11.396 CXX test/cpp_headers/init.o 00:03:11.396 CC test/nvme/compliance/nvme_compliance.o 00:03:11.396 CXX test/cpp_headers/ioat.o 00:03:11.396 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.396 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:11.396 CXX test/cpp_headers/ioat_spec.o 00:03:11.396 CC test/nvme/fdp/fdp.o 00:03:11.396 CC test/nvme/cuse/cuse.o 00:03:11.396 LINK boot_partition 00:03:11.396 CXX test/cpp_headers/iscsi_spec.o 00:03:11.654 CXX test/cpp_headers/json.o 00:03:11.654 CXX test/cpp_headers/jsonrpc.o 00:03:11.654 LINK doorbell_aers 00:03:11.654 LINK fused_ordering 00:03:11.654 CXX test/cpp_headers/likely.o 00:03:11.654 CXX test/cpp_headers/log.o 00:03:11.654 LINK nvme_compliance 00:03:11.654 CXX test/cpp_headers/lvol.o 00:03:11.654 CXX test/cpp_headers/memory.o 00:03:11.912 CXX test/cpp_headers/mmio.o 00:03:11.912 CXX test/cpp_headers/nbd.o 00:03:11.912 LINK fdp 00:03:11.912 CXX 
test/cpp_headers/notify.o 00:03:11.912 CXX test/cpp_headers/nvme.o 00:03:11.912 CXX test/cpp_headers/nvme_intel.o 00:03:11.912 CXX test/cpp_headers/nvme_ocssd.o 00:03:11.912 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:11.912 CXX test/cpp_headers/nvme_spec.o 00:03:11.912 CXX test/cpp_headers/nvme_zns.o 00:03:11.912 CXX test/cpp_headers/nvmf_cmd.o 00:03:11.912 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:11.912 CXX test/cpp_headers/nvmf.o 00:03:11.912 CXX test/cpp_headers/nvmf_spec.o 00:03:12.171 CXX test/cpp_headers/nvmf_transport.o 00:03:12.171 CXX test/cpp_headers/opal.o 00:03:12.171 CXX test/cpp_headers/opal_spec.o 00:03:12.171 CXX test/cpp_headers/pci_ids.o 00:03:12.171 CXX test/cpp_headers/pipe.o 00:03:12.171 CXX test/cpp_headers/queue.o 00:03:12.171 CXX test/cpp_headers/reduce.o 00:03:12.171 CXX test/cpp_headers/rpc.o 00:03:12.171 CXX test/cpp_headers/scheduler.o 00:03:12.429 CXX test/cpp_headers/scsi.o 00:03:12.429 CXX test/cpp_headers/scsi_spec.o 00:03:12.429 CXX test/cpp_headers/sock.o 00:03:12.429 CXX test/cpp_headers/stdinc.o 00:03:12.429 CXX test/cpp_headers/string.o 00:03:12.429 CXX test/cpp_headers/thread.o 00:03:12.429 CXX test/cpp_headers/trace.o 00:03:12.429 CXX test/cpp_headers/trace_parser.o 00:03:12.429 CXX test/cpp_headers/tree.o 00:03:12.429 CXX test/cpp_headers/ublk.o 00:03:12.429 CXX test/cpp_headers/util.o 00:03:12.429 CXX test/cpp_headers/uuid.o 00:03:12.429 CXX test/cpp_headers/version.o 00:03:12.687 CXX test/cpp_headers/vfio_user_pci.o 00:03:12.687 CXX test/cpp_headers/vfio_user_spec.o 00:03:12.687 CXX test/cpp_headers/vhost.o 00:03:12.687 CXX test/cpp_headers/vmd.o 00:03:12.687 CXX test/cpp_headers/xor.o 00:03:12.687 LINK cuse 00:03:12.687 CXX test/cpp_headers/zipf.o 00:03:15.995 LINK esnap 00:03:16.254 00:03:16.254 real 1m21.027s 00:03:16.254 user 8m18.453s 00:03:16.254 sys 1m35.639s 00:03:16.254 19:03:53 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:16.254 19:03:53 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.254 ************************************ 00:03:16.254 END TEST make 00:03:16.254 ************************************ 00:03:16.513 19:03:53 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:16.513 19:03:53 -- nvmf/common.sh@7 -- # uname -s 00:03:16.513 19:03:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.513 19:03:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:16.513 19:03:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:16.513 19:03:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:16.513 19:03:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:16.513 19:03:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:16.513 19:03:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:16.513 19:03:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:16.513 19:03:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.513 19:03:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:16.513 19:03:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1a5ec1fb-7411-49ca-a93a-15d5d1607752 00:03:16.513 19:03:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=1a5ec1fb-7411-49ca-a93a-15d5d1607752 00:03:16.513 19:03:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.513 19:03:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:16.513 19:03:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:16.513 19:03:53 -- nvmf/common.sh@44 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:16.513 19:03:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.513 19:03:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.513 19:03:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.513 19:03:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.513 19:03:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.513 19:03:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.513 19:03:53 -- paths/export.sh@5 -- # export PATH 00:03:16.513 19:03:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.513 19:03:53 -- nvmf/common.sh@46 -- # : 0 00:03:16.513 19:03:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:16.513 19:03:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:16.513 19:03:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:16.513 19:03:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.513 19:03:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.513 19:03:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:16.513 19:03:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:16.513 19:03:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:16.513 19:03:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:16.513 19:03:53 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.513 19:03:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.513 19:03:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.513 19:03:53 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:16.513 19:03:53 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.513 19:03:53 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:16.513 19:03:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.513 19:03:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.513 19:03:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.513 19:03:53 -- spdk/autotest.sh@48 -- # udevadm_pid=48340 00:03:16.513 19:03:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.513 19:03:53 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:16.513 19:03:53 -- spdk/autotest.sh@54 -- # echo 48360 00:03:16.513 19:03:53 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power 00:03:16.513 19:03:53 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:16.513 19:03:53 -- spdk/autotest.sh@56 -- # echo 48361 00:03:16.513 19:03:53 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:16.513 19:03:53 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:16.513 19:03:53 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:16.513 19:03:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:16.513 19:03:53 -- common/autotest_common.sh@10 -- # set +x 00:03:16.513 19:03:53 -- spdk/autotest.sh@70 -- # create_test_list 00:03:16.513 19:03:53 -- common/autotest_common.sh@734 -- # xtrace_disable 00:03:16.513 19:03:53 -- common/autotest_common.sh@10 -- # set +x 00:03:16.513 19:03:53 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:16.513 19:03:53 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:16.513 19:03:53 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:16.513 19:03:53 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:16.513 19:03:53 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:16.513 19:03:53 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:16.513 19:03:53 -- common/autotest_common.sh@1438 -- # uname 00:03:16.513 19:03:53 -- common/autotest_common.sh@1438 -- # '[' Linux = FreeBSD ']' 00:03:16.513 19:03:53 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:16.513 19:03:53 -- common/autotest_common.sh@1458 -- # uname 00:03:16.513 19:03:53 -- common/autotest_common.sh@1458 -- # [[ Linux = FreeBSD ]] 00:03:16.513 19:03:53 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:16.513 19:03:53 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:16.513 19:03:53 -- spdk/autotest.sh@83 -- # hash lcov 00:03:16.513 19:03:53 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:16.513 19:03:53 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:16.513 --rc lcov_branch_coverage=1 00:03:16.513 --rc lcov_function_coverage=1 00:03:16.513 --rc genhtml_branch_coverage=1 00:03:16.513 --rc genhtml_function_coverage=1 00:03:16.513 --rc genhtml_legend=1 00:03:16.514 --rc geninfo_all_blocks=1 00:03:16.514 ' 00:03:16.514 19:03:53 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:16.514 --rc lcov_branch_coverage=1 00:03:16.514 --rc lcov_function_coverage=1 00:03:16.514 --rc genhtml_branch_coverage=1 00:03:16.514 --rc genhtml_function_coverage=1 00:03:16.514 --rc genhtml_legend=1 00:03:16.514 --rc geninfo_all_blocks=1 00:03:16.514 ' 00:03:16.514 19:03:53 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:16.514 --rc lcov_branch_coverage=1 00:03:16.514 --rc lcov_function_coverage=1 00:03:16.514 --rc genhtml_branch_coverage=1 00:03:16.514 --rc genhtml_function_coverage=1 00:03:16.514 --rc genhtml_legend=1 00:03:16.514 --rc geninfo_all_blocks=1 00:03:16.514 --no-external' 00:03:16.514 19:03:53 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:16.514 --rc lcov_branch_coverage=1 00:03:16.514 --rc lcov_function_coverage=1 00:03:16.514 --rc genhtml_branch_coverage=1 00:03:16.514 --rc genhtml_function_coverage=1 00:03:16.514 --rc genhtml_legend=1 00:03:16.514 --rc geninfo_all_blocks=1 00:03:16.514 --no-external' 00:03:16.514 19:03:53 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:16.772 lcov: LCOV version 1.14 00:03:16.772 19:03:53 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:26.742 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:26.742 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:26.742 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:44.821 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:44.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:44.821 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:44.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:44.821 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:44.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:44.821 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:44.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:44.821 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:44.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:44.821 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:44.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 
00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions 
found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:44.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:44.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 
00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:44.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:44.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:48.104 19:04:24 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:48.104 19:04:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:48.104 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:48.104 19:04:24 -- spdk/autotest.sh@102 -- # rm -f 00:03:48.104 19:04:24 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.362 lsblk: /dev/nvme3c3n1: not a block device 00:03:48.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.621 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:48.621 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:48.621 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:48.621 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:48.880 19:04:26 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:48.880 19:04:26 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:03:48.880 19:04:26 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:03:48.880 19:04:26 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:03:48.880 19:04:26 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:48.880 19:04:26 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:03:48.880 19:04:26 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:03:48.880 19:04:26 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:48.880 19:04:26 -- common/autotest_common.sh@1656 -- # 
is_block_zoned nvme1n1 00:03:48.880 19:04:26 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:03:48.880 19:04:26 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:48.880 19:04:26 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme2n1 00:03:48.880 19:04:26 -- common/autotest_common.sh@1645 -- # local device=nvme2n1 00:03:48.880 19:04:26 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:48.880 19:04:26 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme2n2 00:03:48.880 19:04:26 -- common/autotest_common.sh@1645 -- # local device=nvme2n2 00:03:48.880 19:04:26 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:48.880 19:04:26 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme2n3 00:03:48.880 19:04:26 -- common/autotest_common.sh@1645 -- # local device=nvme2n3 00:03:48.880 19:04:26 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:48.880 19:04:26 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme3c3n1 00:03:48.880 19:04:26 -- common/autotest_common.sh@1645 -- # local device=nvme3c3n1 00:03:48.880 19:04:26 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:48.880 19:04:26 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme3n1 00:03:48.880 19:04:26 -- common/autotest_common.sh@1645 -- # local device=nvme3n1 00:03:48.880 19:04:26 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:48.880 19:04:26 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:48.880 19:04:26 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:48.881 19:04:26 -- spdk/autotest.sh@121 -- # grep -v p 00:03:48.881 19:04:26 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:48.881 19:04:26 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:48.881 19:04:26 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:48.881 19:04:26 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:48.881 19:04:26 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:48.881 19:04:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:48.881 No valid GPT data, bailing 00:03:48.881 19:04:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:48.881 19:04:26 -- scripts/common.sh@393 -- # pt= 00:03:48.881 19:04:26 -- scripts/common.sh@394 -- # return 1 00:03:48.881 19:04:26 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:48.881 1+0 records in 00:03:48.881 
1+0 records out 00:03:48.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131041 s, 80.0 MB/s 00:03:48.881 19:04:26 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:48.881 19:04:26 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:48.881 19:04:26 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:03:48.881 19:04:26 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:48.881 19:04:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:48.881 No valid GPT data, bailing 00:03:48.881 19:04:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:48.881 19:04:26 -- scripts/common.sh@393 -- # pt= 00:03:48.881 19:04:26 -- scripts/common.sh@394 -- # return 1 00:03:48.881 19:04:26 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:48.881 1+0 records in 00:03:48.881 1+0 records out 00:03:48.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470626 s, 223 MB/s 00:03:48.881 19:04:26 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:48.881 19:04:26 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:48.881 19:04:26 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n1 00:03:48.881 19:04:26 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:48.881 19:04:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:48.881 No valid GPT data, bailing 00:03:49.140 19:04:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:49.140 19:04:26 -- scripts/common.sh@393 -- # pt= 00:03:49.140 19:04:26 -- scripts/common.sh@394 -- # return 1 00:03:49.140 19:04:26 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:49.140 1+0 records in 00:03:49.140 1+0 records out 00:03:49.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467752 s, 224 MB/s 00:03:49.140 19:04:26 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:49.140 19:04:26 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:49.140 19:04:26 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n2 00:03:49.140 19:04:26 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:49.140 19:04:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:49.140 No valid GPT data, bailing 00:03:49.140 19:04:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:49.140 19:04:26 -- scripts/common.sh@393 -- # pt= 00:03:49.140 19:04:26 -- scripts/common.sh@394 -- # return 1 00:03:49.140 19:04:26 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:49.140 1+0 records in 00:03:49.140 1+0 records out 00:03:49.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048576 s, 216 MB/s 00:03:49.140 19:04:26 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:49.140 19:04:26 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:49.140 19:04:26 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n3 00:03:49.140 19:04:26 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:49.140 19:04:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:49.140 No valid GPT data, bailing 00:03:49.140 19:04:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:49.140 19:04:26 -- scripts/common.sh@393 -- # pt= 00:03:49.140 19:04:26 -- scripts/common.sh@394 -- # return 1 00:03:49.140 19:04:26 -- spdk/autotest.sh@125 
-- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:49.140 1+0 records in 00:03:49.140 1+0 records out 00:03:49.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00343612 s, 305 MB/s 00:03:49.140 19:04:26 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:49.140 19:04:26 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:49.140 19:04:26 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n1 00:03:49.140 19:04:26 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:49.140 19:04:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:49.140 No valid GPT data, bailing 00:03:49.140 19:04:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:49.140 19:04:26 -- scripts/common.sh@393 -- # pt= 00:03:49.140 19:04:26 -- scripts/common.sh@394 -- # return 1 00:03:49.140 19:04:26 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:49.399 1+0 records in 00:03:49.399 1+0 records out 00:03:49.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503638 s, 208 MB/s 00:03:49.399 19:04:26 -- spdk/autotest.sh@129 -- # sync 00:03:49.399 19:04:26 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:49.399 19:04:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:49.399 19:04:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:51.300 19:04:28 -- spdk/autotest.sh@135 -- # uname -s 00:03:51.300 19:04:28 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:51.300 19:04:28 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:51.300 19:04:28 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:51.300 19:04:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:51.300 19:04:28 -- common/autotest_common.sh@10 -- # set +x 00:03:51.300 ************************************ 00:03:51.300 START TEST setup.sh 00:03:51.300 ************************************ 00:03:51.301 19:04:28 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:51.301 * Looking for test storage... 00:03:51.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:51.301 19:04:28 -- setup/test-setup.sh@10 -- # uname -s 00:03:51.301 19:04:28 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:51.301 19:04:28 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:51.301 19:04:28 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:51.301 19:04:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:51.301 19:04:28 -- common/autotest_common.sh@10 -- # set +x 00:03:51.301 ************************************ 00:03:51.301 START TEST acl 00:03:51.301 ************************************ 00:03:51.301 19:04:28 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:51.301 * Looking for test storage... 
00:03:51.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:51.301 19:04:28 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:51.301 19:04:28 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:03:51.301 19:04:28 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:03:51.301 19:04:28 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:03:51.301 19:04:28 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:51.301 19:04:28 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:03:51.301 19:04:28 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:03:51.301 19:04:28 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:51.301 19:04:28 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:03:51.301 19:04:28 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:03:51.301 19:04:28 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:51.301 19:04:28 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme2n1 00:03:51.301 19:04:28 -- common/autotest_common.sh@1645 -- # local device=nvme2n1 00:03:51.301 19:04:28 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:51.301 19:04:28 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme2n2 00:03:51.301 19:04:28 -- common/autotest_common.sh@1645 -- # local device=nvme2n2 00:03:51.301 19:04:28 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:51.301 19:04:28 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme2n3 00:03:51.301 19:04:28 -- common/autotest_common.sh@1645 -- # local device=nvme2n3 00:03:51.301 19:04:28 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:51.301 19:04:28 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme3c3n1 00:03:51.301 19:04:28 -- common/autotest_common.sh@1645 -- # local device=nvme3c3n1 00:03:51.301 19:04:28 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:51.301 19:04:28 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme3n1 00:03:51.301 19:04:28 -- common/autotest_common.sh@1645 -- # local device=nvme3n1 00:03:51.301 19:04:28 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:51.301 19:04:28 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:51.301 19:04:28 -- setup/acl.sh@12 -- # devs=() 00:03:51.301 19:04:28 -- setup/acl.sh@12 -- # declare -a devs 
00:03:51.301 19:04:28 -- setup/acl.sh@13 -- # drivers=() 00:03:51.301 19:04:28 -- setup/acl.sh@13 -- # declare -A drivers 00:03:51.301 19:04:28 -- setup/acl.sh@51 -- # setup reset 00:03:51.301 19:04:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.301 19:04:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.678 19:04:29 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:52.678 19:04:29 -- setup/acl.sh@16 -- # local dev driver 00:03:52.678 19:04:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.678 19:04:29 -- setup/acl.sh@15 -- # setup output status 00:03:52.678 19:04:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.678 19:04:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:52.678 Hugepages 00:03:52.678 node hugesize free / total 00:03:52.678 19:04:29 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:52.678 19:04:29 -- setup/acl.sh@19 -- # continue 00:03:52.678 19:04:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.678 00:03:52.678 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:52.678 19:04:29 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:52.678 19:04:29 -- setup/acl.sh@19 -- # continue 00:03:52.678 19:04:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.678 19:04:30 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:52.678 19:04:30 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:52.678 19:04:30 -- setup/acl.sh@20 -- # continue 00:03:52.678 19:04:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.938 19:04:30 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:52.938 19:04:30 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:52.938 19:04:30 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:52.938 19:04:30 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:52.938 19:04:30 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:52.938 19:04:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.938 19:04:30 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:52.938 19:04:30 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:52.938 19:04:30 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:52.938 19:04:30 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:52.938 19:04:30 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:52.938 19:04:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:52.938 19:04:30 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:52.938 19:04:30 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:52.938 19:04:30 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:52.938 19:04:30 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:52.938 19:04:30 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:52.938 19:04:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.197 19:04:30 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:53.197 19:04:30 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:53.197 19:04:30 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:53.197 19:04:30 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:53.197 19:04:30 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:53.197 19:04:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.197 19:04:30 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:53.197 19:04:30 -- setup/acl.sh@54 -- # run_test denied denied 00:03:53.197 19:04:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:53.197 
19:04:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:53.197 19:04:30 -- common/autotest_common.sh@10 -- # set +x 00:03:53.197 ************************************ 00:03:53.197 START TEST denied 00:03:53.197 ************************************ 00:03:53.197 19:04:30 -- common/autotest_common.sh@1102 -- # denied 00:03:53.197 19:04:30 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:53.197 19:04:30 -- setup/acl.sh@38 -- # setup output config 00:03:53.197 19:04:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.197 19:04:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.197 19:04:30 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:54.134 lsblk: /dev/nvme3c3n1: not a block device 00:03:54.404 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:54.404 19:04:31 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:54.404 19:04:31 -- setup/acl.sh@28 -- # local dev driver 00:03:54.404 19:04:31 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:54.404 19:04:31 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:54.404 19:04:31 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:54.404 19:04:31 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:54.404 19:04:31 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:54.404 19:04:31 -- setup/acl.sh@41 -- # setup reset 00:03:54.404 19:04:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.404 19:04:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.969 00:04:00.969 real 0m7.249s 00:04:00.969 user 0m0.898s 00:04:00.969 sys 0m1.431s 00:04:00.969 19:04:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:00.969 ************************************ 00:04:00.969 END TEST denied 00:04:00.969 ************************************ 00:04:00.969 19:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:00.969 19:04:37 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:00.969 19:04:37 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:00.969 19:04:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:00.969 19:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:00.969 ************************************ 00:04:00.969 START TEST allowed 00:04:00.969 ************************************ 00:04:00.969 19:04:37 -- common/autotest_common.sh@1102 -- # allowed 00:04:00.969 19:04:37 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:00.969 19:04:37 -- setup/acl.sh@45 -- # setup output config 00:04:00.969 19:04:37 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:00.969 19:04:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.969 19:04:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.228 lsblk: /dev/nvme1c1n1: not a block device 00:04:01.486 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.486 19:04:38 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:01.486 19:04:38 -- setup/acl.sh@28 -- # local dev driver 00:04:01.486 19:04:38 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:01.486 19:04:38 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:01.486 19:04:38 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:01.486 19:04:38 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:01.486 19:04:38 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 
00:04:01.486 19:04:38 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:01.486 19:04:38 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:04:01.486 19:04:38 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:01.486 19:04:38 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:01.486 19:04:38 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:01.486 19:04:38 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:01.486 19:04:38 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:01.486 19:04:38 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:01.486 19:04:38 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:01.487 19:04:38 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:01.487 19:04:38 -- setup/acl.sh@48 -- # setup reset 00:04:01.487 19:04:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.487 19:04:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.866 ************************************ 00:04:02.866 END TEST allowed 00:04:02.866 ************************************ 00:04:02.866 00:04:02.866 real 0m2.371s 00:04:02.866 user 0m1.076s 00:04:02.866 sys 0m1.304s 00:04:02.866 19:04:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.866 19:04:40 -- common/autotest_common.sh@10 -- # set +x 00:04:02.866 ************************************ 00:04:02.866 END TEST acl 00:04:02.866 ************************************ 00:04:02.866 00:04:02.866 real 0m11.622s 00:04:02.866 user 0m2.844s 00:04:02.866 sys 0m3.895s 00:04:02.866 19:04:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.866 19:04:40 -- common/autotest_common.sh@10 -- # set +x 00:04:02.866 19:04:40 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:02.866 19:04:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:02.866 19:04:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:02.866 19:04:40 -- common/autotest_common.sh@10 -- # set +x 00:04:02.866 ************************************ 00:04:02.866 START TEST hugepages 00:04:02.866 ************************************ 00:04:02.866 19:04:40 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:02.866 * Looking for test storage... 
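Both halves of the acl test just completed end the same way: for each PCI function under test, verify resolves the sysfs driver symlink and compares its basename against the driver expected to be bound there (nvme in both traces above, since the blocked controller is skipped by setup.sh and the non-allowed controllers are left untouched). A small sketch of that verification, assuming a helper name and argument order of my own; the BDFs in the usage line are the ones from this run.

# Sketch of the driver-binding check performed by the denied/allowed tests:
# every BDF passed in must have its sysfs driver symlink resolving to the
# expected kernel driver.
verify_binding() {
    local expected=$1 bdf link bound
    shift
    for bdf in "$@"; do
        link=/sys/bus/pci/devices/$bdf/driver
        if [[ ! -e $link ]]; then
            echo "$bdf: no driver bound" >&2
            return 1
        fi
        bound=$(basename "$(readlink -f "$link")")   # e.g. /sys/bus/pci/drivers/nvme -> nvme
        if [[ $bound != "$expected" ]]; then
            echo "$bdf: bound to $bound, expected $expected" >&2
            return 1
        fi
    done
}

# Example matching the allowed test above (adjust BDFs to the local topology):
verify_binding nvme 0000:00:07.0 0000:00:08.0 0000:00:09.0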
00:04:02.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:02.866 19:04:40 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:02.866 19:04:40 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:02.866 19:04:40 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:02.866 19:04:40 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:02.866 19:04:40 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:02.866 19:04:40 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:02.866 19:04:40 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:02.866 19:04:40 -- setup/common.sh@18 -- # local node= 00:04:02.866 19:04:40 -- setup/common.sh@19 -- # local var val 00:04:02.866 19:04:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.866 19:04:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.866 19:04:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.866 19:04:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.866 19:04:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.866 19:04:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.866 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5467788 kB' 'MemAvailable: 7424992 kB' 'Buffers: 2436 kB' 'Cached: 2169280 kB' 'SwapCached: 0 kB' 'Active: 837564 kB' 'Inactive: 1434816 kB' 'Active(anon): 111176 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 102384 kB' 'Mapped: 48832 kB' 'Shmem: 10512 kB' 'KReclaimable: 65876 kB' 'Slab: 143656 kB' 'SReclaimable: 65876 kB' 'SUnreclaim: 77780 kB' 'KernelStack: 6184 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 326528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- 
setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.867 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.867 19:04:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # continue 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.868 19:04:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.868 19:04:40 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.868 19:04:40 -- setup/common.sh@33 -- # echo 2048 00:04:02.868 19:04:40 -- setup/common.sh@33 -- # return 0 00:04:02.868 19:04:40 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:02.868 19:04:40 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:02.868 19:04:40 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:02.868 19:04:40 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:02.868 19:04:40 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:02.868 19:04:40 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
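The long field-by-field walk above is get_meminfo pulling a single value, Hugepagesize, out of /proc/meminfo (2048 kB on this host); the default_setup test that follows then turns its 2097152 kB request into nr_hugepages = 2097152 / 2048 = 1024 pages, and clear_hp zeroes any leftover per-node reservations first. A compact sketch of both steps, assuming the 2048 kB page size reported here; awk stands in for the shell read loop purely for brevity, and writing nr_hugepages requires root.

# Sketch: read Hugepagesize from /proc/meminfo and derive how many pages are
# needed for the 2 GiB target used by default_setup.
hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this run
target_kb=$(( 2 * 1024 * 1024 ))                                          # 2097152 kB requested
nr_hugepages=$(( target_kb / hugepagesize_kb ))                           # 1024 pages here
echo "Hugepagesize=${hugepagesize_kb} kB -> nr_hugepages=${nr_hugepages}"

# Clear leftover pages per NUMA node before the test, as clear_hp does
# (must run as root; the glob covers both 2 MiB and 1 GiB pools):
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
done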
00:04:02.868 19:04:40 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:02.868 19:04:40 -- setup/hugepages.sh@207 -- # get_nodes 00:04:02.868 19:04:40 -- setup/hugepages.sh@27 -- # local node 00:04:02.868 19:04:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.868 19:04:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:02.868 19:04:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.868 19:04:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.868 19:04:40 -- setup/hugepages.sh@208 -- # clear_hp 00:04:02.868 19:04:40 -- setup/hugepages.sh@37 -- # local node hp 00:04:02.868 19:04:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.868 19:04:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.868 19:04:40 -- setup/hugepages.sh@41 -- # echo 0 00:04:02.868 19:04:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.868 19:04:40 -- setup/hugepages.sh@41 -- # echo 0 00:04:02.868 19:04:40 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:02.868 19:04:40 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.868 19:04:40 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:02.868 19:04:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:02.868 19:04:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:02.868 19:04:40 -- common/autotest_common.sh@10 -- # set +x 00:04:02.868 ************************************ 00:04:02.868 START TEST default_setup 00:04:02.868 ************************************ 00:04:02.868 19:04:40 -- common/autotest_common.sh@1102 -- # default_setup 00:04:02.868 19:04:40 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:02.868 19:04:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.868 19:04:40 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.868 19:04:40 -- setup/hugepages.sh@51 -- # shift 00:04:02.868 19:04:40 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.868 19:04:40 -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.868 19:04:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.868 19:04:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.868 19:04:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.868 19:04:40 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:02.868 19:04:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.868 19:04:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.868 19:04:40 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:02.868 19:04:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.868 19:04:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.868 19:04:40 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:02.868 19:04:40 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.868 19:04:40 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.868 19:04:40 -- setup/hugepages.sh@73 -- # return 0 00:04:02.868 19:04:40 -- setup/hugepages.sh@137 -- # setup output 00:04:02.868 19:04:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.868 19:04:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.805 lsblk: /dev/nvme1c1n1: not a block device 00:04:04.064 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.064 0000:00:07.0 
(1b36 0010): nvme -> uio_pci_generic 00:04:04.064 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.327 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.327 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.327 19:04:41 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:04.327 19:04:41 -- setup/hugepages.sh@89 -- # local node 00:04:04.327 19:04:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.327 19:04:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.327 19:04:41 -- setup/hugepages.sh@92 -- # local surp 00:04:04.327 19:04:41 -- setup/hugepages.sh@93 -- # local resv 00:04:04.327 19:04:41 -- setup/hugepages.sh@94 -- # local anon 00:04:04.327 19:04:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.327 19:04:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.327 19:04:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.327 19:04:41 -- setup/common.sh@18 -- # local node= 00:04:04.327 19:04:41 -- setup/common.sh@19 -- # local var val 00:04:04.327 19:04:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.327 19:04:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.327 19:04:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.327 19:04:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.327 19:04:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.327 19:04:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7489844 kB' 'MemAvailable: 9446796 kB' 'Buffers: 2436 kB' 'Cached: 2169264 kB' 'SwapCached: 0 kB' 'Active: 854924 kB' 'Inactive: 1434828 kB' 'Active(anon): 128536 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119828 kB' 'Mapped: 49104 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142796 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77448 kB' 'KernelStack: 6304 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.327 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.327 19:04:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.328 19:04:41 -- 
setup/common.sh@33 -- # echo 0 00:04:04.328 19:04:41 -- setup/common.sh@33 -- # return 0 00:04:04.328 19:04:41 -- setup/hugepages.sh@97 -- # anon=0 00:04:04.328 19:04:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.328 19:04:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.328 19:04:41 -- setup/common.sh@18 -- # local node= 00:04:04.328 19:04:41 -- setup/common.sh@19 -- # local var val 00:04:04.328 19:04:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.328 19:04:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.328 19:04:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.328 19:04:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.328 19:04:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.328 19:04:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7489844 kB' 'MemAvailable: 9446796 kB' 'Buffers: 2436 kB' 'Cached: 2169264 kB' 'SwapCached: 0 kB' 'Active: 854728 kB' 'Inactive: 1434828 kB' 'Active(anon): 128340 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 48972 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142768 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77420 kB' 'KernelStack: 6288 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- 
setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.328 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.328 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.329 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.329 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.330 19:04:41 -- setup/common.sh@33 -- # echo 0 00:04:04.330 19:04:41 -- setup/common.sh@33 -- # return 0 00:04:04.330 19:04:41 -- setup/hugepages.sh@99 -- # surp=0 00:04:04.330 19:04:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.330 19:04:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.330 19:04:41 -- setup/common.sh@18 -- # local node= 00:04:04.330 19:04:41 -- setup/common.sh@19 -- # local var val 00:04:04.330 19:04:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.330 19:04:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.330 19:04:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.330 19:04:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.330 19:04:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.330 19:04:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7489596 kB' 'MemAvailable: 9446564 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854344 kB' 'Inactive: 1434844 kB' 'Active(anon): 127956 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119156 kB' 'Mapped: 48772 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142784 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77436 kB' 'KernelStack: 6224 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # 
continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.330 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.330 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.331 19:04:41 -- setup/common.sh@33 -- # echo 0 00:04:04.331 19:04:41 -- setup/common.sh@33 -- # return 0 00:04:04.331 19:04:41 -- setup/hugepages.sh@100 -- # resv=0 00:04:04.331 nr_hugepages=1024 00:04:04.331 19:04:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.331 resv_hugepages=0 00:04:04.331 19:04:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.331 surplus_hugepages=0 00:04:04.331 19:04:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.331 anon_hugepages=0 00:04:04.331 19:04:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.331 19:04:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.331 19:04:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.331 19:04:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.331 19:04:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.331 19:04:41 -- setup/common.sh@18 -- # local node= 00:04:04.331 19:04:41 -- setup/common.sh@19 -- # local var val 00:04:04.331 19:04:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.331 19:04:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.331 19:04:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.331 19:04:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.331 19:04:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.331 19:04:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7489596 kB' 'MemAvailable: 9446564 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854344 kB' 'Inactive: 1434844 kB' 'Active(anon): 127956 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119156 kB' 'Mapped: 48772 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142784 kB' 'SReclaimable: 65348 kB' 
'SUnreclaim: 77436 kB' 'KernelStack: 6224 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.331 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.331 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.332 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.332 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.333 19:04:41 -- setup/common.sh@33 -- # echo 1024 00:04:04.333 19:04:41 -- setup/common.sh@33 -- # return 0 00:04:04.333 19:04:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.333 19:04:41 -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.333 19:04:41 -- setup/hugepages.sh@27 -- # local node 00:04:04.333 19:04:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.333 19:04:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.333 19:04:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.333 19:04:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.333 19:04:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.333 19:04:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.333 19:04:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.333 19:04:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.333 19:04:41 -- setup/common.sh@18 -- # local node=0 00:04:04.333 19:04:41 -- setup/common.sh@19 -- # local var val 00:04:04.333 19:04:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.333 19:04:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.333 19:04:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.333 19:04:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.333 19:04:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.333 19:04:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7489856 kB' 'MemUsed: 4752116 kB' 'SwapCached: 0 kB' 'Active: 854344 kB' 'Inactive: 1434844 kB' 'Active(anon): 127956 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2171704 kB' 'Mapped: 48772 kB' 'AnonPages: 119156 kB' 'Shmem: 10472 kB' 'KernelStack: 6224 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65348 kB' 'Slab: 142784 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- 
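The get_nodes/nodes_sys/nodes_test fragments traced here maintain two parallel views: the hugepage count each NUMA node actually reports and the count the test expects for it. A rough sketch of that bookkeeping, reconstructed from the xtrace rather than copied from setup/hugepages.sh (the sysfs read below is an assumption about where the per-node figure comes from):

shopt -s extglob                     # the node+([0-9]) glob needs extglob
declare -a nodes_sys nodes_test

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # per-node 2 MiB hugepage count; the trace records 1024 for node 0
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))               # this VM has a single node, so no_nodes=1
}

verify_node_counts() {               # sketch of the comparison step
    local node surp=$1 resv=$2
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # resv=0 in this run
        (( nodes_test[node] += surp ))   # per-node HugePages_Surp, also 0
        echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
}

The comparison step is what produces the "node0=1024 expecting 1024" line further down in this log.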
setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.333 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.333 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # 
continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # continue 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.334 19:04:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.334 19:04:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.334 19:04:41 -- setup/common.sh@33 -- # echo 0 00:04:04.334 19:04:41 -- setup/common.sh@33 -- # return 0 00:04:04.334 19:04:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.334 19:04:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.334 19:04:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.334 19:04:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.334 node0=1024 expecting 1024 00:04:04.334 19:04:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.334 19:04:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.334 00:04:04.334 real 0m1.470s 00:04:04.334 user 0m0.666s 00:04:04.334 sys 0m0.786s 00:04:04.334 19:04:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.334 19:04:41 -- common/autotest_common.sh@10 -- # set +x 00:04:04.334 ************************************ 00:04:04.334 END TEST default_setup 00:04:04.334 ************************************ 00:04:04.594 19:04:41 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:04.594 19:04:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:04.594 19:04:41 -- 
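Stripped of the key-by-key scanning, the checks that let default_setup finish are plain arithmetic on the values read from the meminfo dumps above. The names below mirror the trace; the numbers are the ones this run reported:

nr_hugepages=1024    # pages the test configured
total=1024           # HugePages_Total from get_meminfo
surp=0               # HugePages_Surp
resv=0               # HugePages_Rsvd
anon=0               # AnonHugePages

(( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0
# node 0 likewise reports 1024 pages against an expectation of 1024, so the
# final [[ 1024 == 1024 ]] comparison passes and END TEST default_setup is reached.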
common/autotest_common.sh@1081 -- # xtrace_disable 00:04:04.594 19:04:41 -- common/autotest_common.sh@10 -- # set +x 00:04:04.594 ************************************ 00:04:04.594 START TEST per_node_1G_alloc 00:04:04.594 ************************************ 00:04:04.594 19:04:41 -- common/autotest_common.sh@1102 -- # per_node_1G_alloc 00:04:04.594 19:04:41 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:04.594 19:04:41 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:04.594 19:04:41 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:04.594 19:04:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:04.594 19:04:41 -- setup/hugepages.sh@51 -- # shift 00:04:04.594 19:04:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:04.594 19:04:41 -- setup/hugepages.sh@52 -- # local node_ids 00:04:04.594 19:04:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.594 19:04:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:04.594 19:04:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:04.594 19:04:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:04.594 19:04:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.594 19:04:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.594 19:04:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.594 19:04:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.594 19:04:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.594 19:04:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:04.594 19:04:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:04.594 19:04:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:04.594 19:04:41 -- setup/hugepages.sh@73 -- # return 0 00:04:04.594 19:04:41 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:04.594 19:04:41 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:04.594 19:04:41 -- setup/hugepages.sh@146 -- # setup output 00:04:04.594 19:04:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.594 19:04:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.115 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.115 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.115 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.115 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.115 19:04:42 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:05.115 19:04:42 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:05.115 19:04:42 -- setup/hugepages.sh@89 -- # local node 00:04:05.115 19:04:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.115 19:04:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.115 19:04:42 -- setup/hugepages.sh@92 -- # local surp 00:04:05.115 19:04:42 -- setup/hugepages.sh@93 -- # local resv 00:04:05.115 19:04:42 -- setup/hugepages.sh@94 -- # local anon 00:04:05.115 19:04:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.115 19:04:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.115 19:04:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.115 19:04:42 -- setup/common.sh@18 -- # local node= 00:04:05.115 19:04:42 -- setup/common.sh@19 -- # local var val 00:04:05.115 19:04:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.115 
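per_node_1G_alloc starts by converting the requested size into a page count and pinning it to one node. The arithmetic implied by the trace (size 1048576 kB against the 2048 kB Hugepagesize reported above) is sketched below; the exact helper, get_test_nr_hugepages, may compute it differently:

size_kb=1048576                                  # 1 GiB requested for node 0
hugepagesize_kb=2048                             # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 512, matching nr_hugepages=512 in the trace
# the allocation is then pinned to node 0 and applied by the setup script:
NRHUGE=$nr_hugepages HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh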
19:04:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.115 19:04:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.115 19:04:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.115 19:04:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.115 19:04:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.115 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.115 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8547592 kB' 'MemAvailable: 10504560 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854564 kB' 'Inactive: 1434844 kB' 'Active(anon): 128176 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119348 kB' 'Mapped: 48968 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142752 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77404 kB' 'KernelStack: 6256 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
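Each run of read/continue entries in this log is setup/common.sh's get_meminfo stepping through /proc/meminfo (or a node's meminfo file) one field at a time until it reaches the requested key. A rough reconstruction from the trace, not the verbatim SPDK helper:

shopt -s extglob                     # for the +([0-9]) prefix-strip pattern
get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo mem
    # with a node argument, prefer the per-node view when sysfs provides one
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines carry a "Node <n> " prefix; strip it
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# e.g. get_meminfo HugePages_Surp    -> 0 in this run
#      get_meminfo HugePages_Surp 0  -> node 0's surplus count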
00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.116 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.116 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.117 19:04:42 -- setup/common.sh@33 -- # echo 0 00:04:05.117 19:04:42 -- setup/common.sh@33 -- # return 0 00:04:05.117 19:04:42 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.117 19:04:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.117 19:04:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.117 19:04:42 -- setup/common.sh@18 -- # local node= 00:04:05.117 19:04:42 -- setup/common.sh@19 -- # local var val 00:04:05.117 19:04:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.117 19:04:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.117 19:04:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.117 19:04:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.117 19:04:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.117 19:04:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8547344 kB' 'MemAvailable: 10504312 kB' 'Buffers: 2436 kB' 
'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854600 kB' 'Inactive: 1434844 kB' 'Active(anon): 128212 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119388 kB' 'Mapped: 48844 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142836 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77488 kB' 'KernelStack: 6228 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 
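[editor's note] For readers following this trace: the repeated "[[ <key> == ... ]]" / "continue" pairs come from setup/common.sh scanning one meminfo entry at a time until it reaches the requested key (AnonHugePages a few lines back, HugePages_Surp here). A minimal, hedged sketch of that scan, with illustrative names rather than the verbatim script, assuming the usual "key:   value kB" layout of /proc/meminfo:

    get_meminfo_sketch() {                     # hedged reconstruction, not the real helper
        local get=$1 mem_f=/proc/meminfo var val _
        while IFS=': ' read -r var val _; do   # splits "AnonHugePages:   0 kB" into key / value / unit
            [[ $var == "$get" ]] || continue   # every non-matching key produces one 'continue' line above
            echo "$val"                        # matching key: print the number and stop
            return 0
        done < "$mem_f"
    }

Called as "get_meminfo_sketch HugePages_Surp" it would print 0 for the meminfo snapshot dumped above, which is exactly the value the trace returns at setup/common.sh@33.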
00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.117 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.117 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 
19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.118 19:04:42 -- setup/common.sh@33 -- # echo 0 00:04:05.118 19:04:42 -- setup/common.sh@33 -- # return 0 00:04:05.118 19:04:42 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.118 19:04:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.118 19:04:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.118 19:04:42 -- setup/common.sh@18 -- # local node= 00:04:05.118 19:04:42 -- setup/common.sh@19 -- # local var val 00:04:05.118 19:04:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.118 19:04:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.118 19:04:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.118 19:04:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.118 19:04:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.118 19:04:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8547344 kB' 'MemAvailable: 10504312 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854584 kB' 'Inactive: 1434844 kB' 'Active(anon): 128196 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119388 kB' 'Mapped: 48716 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142836 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77488 kB' 'KernelStack: 6240 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.118 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.118 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
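[editor's note] The "local node=" / "[[ -e /sys/devices/system/node/node/meminfo ]]" lines at the start of each lookup decide where the snapshot comes from: with no node argument the sysfs path does not exist and /proc/meminfo is used, while the node-scoped calls later in this log switch to node0's own meminfo. A rough, hedged sketch of that selection (illustrative variable names; assumes the caller passes the node number, if any, as $1):

    node=$1                                        # empty for a system-wide read, "0" for node 0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node $node }")                  # sysfs lines carry a "Node N " prefix; /proc lines do not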
00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.119 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.119 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.120 19:04:42 -- setup/common.sh@33 -- # echo 0 00:04:05.120 19:04:42 -- setup/common.sh@33 -- # return 0 00:04:05.120 19:04:42 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.120 nr_hugepages=512 00:04:05.120 19:04:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:05.120 resv_hugepages=0 00:04:05.120 19:04:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.120 surplus_hugepages=0 00:04:05.120 19:04:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.120 anon_hugepages=0 00:04:05.120 19:04:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.120 19:04:42 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.120 19:04:42 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:05.120 19:04:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.120 19:04:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.120 19:04:42 -- setup/common.sh@18 -- # local node= 00:04:05.120 19:04:42 -- setup/common.sh@19 -- # local var val 00:04:05.120 19:04:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.120 19:04:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.120 19:04:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.120 19:04:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.120 19:04:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.120 19:04:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8547596 kB' 'MemAvailable: 10504564 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854568 kB' 'Inactive: 1434844 kB' 'Active(anon): 128180 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119388 kB' 'Mapped: 48716 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142836 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77488 kB' 'KernelStack: 6240 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.120 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.120 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
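[editor's note] The hugepages.sh lines interleaved above (anon=0, surp=0, resv=0, the "nr_hugepages=512" echo, and the HugePages_Total lookup running here) add up to a single sanity check on the huge-page pool. A hedged reconstruction of that check, assuming a get_meminfo helper like the one sketched earlier; the real script may structure it differently:

    anon=$(get_meminfo AnonHugePages)     # 0 kB in this run, echoed as anon_hugepages=0
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    nr_hugepages=512                      # the count this test requested
    # The pool is considered healthy only if the kernel's total matches the request
    # once surplus and reserved pages are folded in:
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1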
00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.121 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.121 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.122 19:04:42 -- setup/common.sh@33 -- # echo 512 00:04:05.122 19:04:42 -- setup/common.sh@33 -- # return 0 00:04:05.122 19:04:42 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.122 19:04:42 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.122 19:04:42 -- setup/hugepages.sh@27 -- # local node 00:04:05.122 19:04:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.122 19:04:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.122 19:04:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.122 19:04:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.122 19:04:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.122 19:04:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.122 19:04:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.122 19:04:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.122 
19:04:42 -- setup/common.sh@18 -- # local node=0 00:04:05.122 19:04:42 -- setup/common.sh@19 -- # local var val 00:04:05.122 19:04:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.122 19:04:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.122 19:04:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.122 19:04:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.122 19:04:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.122 19:04:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8547596 kB' 'MemUsed: 3694376 kB' 'SwapCached: 0 kB' 'Active: 854556 kB' 'Inactive: 1434844 kB' 'Active(anon): 128168 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2171704 kB' 'Mapped: 48716 kB' 'AnonPages: 119384 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65348 kB' 'Slab: 142836 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 
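[editor's note] From setup/hugepages.sh@112 onward the same lookup is repeated per NUMA node: get_nodes found a single node (no_nodes=1), and the HugePages_Surp read now runs against /sys/devices/system/node/node0/meminfo, whose field set differs slightly from /proc/meminfo (MemUsed and FilePages instead of MemAvailable, Buffers and Cached, as the snapshot above shows). A short, hedged sketch of that per-node pass, again assuming the get_meminfo helper sketched earlier accepts an optional node number:

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}                          # "0" on this single-node VM
        node_surp=$(get_meminfo HugePages_Surp "$node")  # reads node0's own meminfo
        echo "node$node surplus: $node_surp"             # 0 in the snapshot above
    done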
00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.122 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.122 19:04:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # continue 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.123 19:04:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.123 19:04:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.123 19:04:42 -- setup/common.sh@33 -- # echo 0 00:04:05.123 19:04:42 -- setup/common.sh@33 -- # return 0 00:04:05.123 19:04:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.123 19:04:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.123 19:04:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.123 19:04:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.123 node0=512 expecting 512 00:04:05.123 19:04:42 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:05.123 19:04:42 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:05.123 00:04:05.123 real 0m0.709s 00:04:05.123 user 0m0.357s 00:04:05.123 sys 0m0.396s 00:04:05.123 19:04:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.123 19:04:42 -- common/autotest_common.sh@10 -- # set +x 00:04:05.123 ************************************ 00:04:05.123 END TEST per_node_1G_alloc 00:04:05.123 ************************************ 00:04:05.382 19:04:42 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:05.382 19:04:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:05.382 19:04:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:05.382 19:04:42 -- common/autotest_common.sh@10 -- # set +x 00:04:05.382 ************************************ 00:04:05.382 START TEST even_2G_alloc 00:04:05.382 ************************************ 00:04:05.382 19:04:42 -- common/autotest_common.sh@1102 -- # even_2G_alloc 00:04:05.382 19:04:42 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:05.382 19:04:42 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:05.382 19:04:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.382 19:04:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.382 19:04:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:05.382 19:04:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.382 19:04:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.382 19:04:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.382 19:04:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.382 19:04:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:05.382 19:04:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.382 19:04:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 
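The even_2G_alloc test starting in the trace above requests size=2097152 kB and arrives at nr_hugepages=1024, which is consistent with dividing the requested size by the 2048 kB default hugepage size reported in the meminfo dumps later in this run. A minimal illustrative sketch of that arithmetic (illustrative only, not the script's own code; assumes a Linux /proc/meminfo):
#!/usr/bin/env bash
# Illustrative: derive a hugepage count the way the traced get_test_nr_hugepages
# step appears to for even_2G_alloc (sizes are in kB).
size_kb=2097152                                                      # 2 GiB requested by the test
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"                 # -> nr_hugepages=1024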
00:04:05.382 19:04:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.382 19:04:42 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.382 19:04:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.382 19:04:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:05.382 19:04:42 -- setup/hugepages.sh@83 -- # : 0 00:04:05.382 19:04:42 -- setup/hugepages.sh@84 -- # : 0 00:04:05.382 19:04:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.382 19:04:42 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:05.382 19:04:42 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:05.382 19:04:42 -- setup/hugepages.sh@153 -- # setup output 00:04:05.382 19:04:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.382 19:04:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.641 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.641 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.641 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.641 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.903 19:04:43 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:05.903 19:04:43 -- setup/hugepages.sh@89 -- # local node 00:04:05.903 19:04:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.903 19:04:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.903 19:04:43 -- setup/hugepages.sh@92 -- # local surp 00:04:05.903 19:04:43 -- setup/hugepages.sh@93 -- # local resv 00:04:05.903 19:04:43 -- setup/hugepages.sh@94 -- # local anon 00:04:05.903 19:04:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.903 19:04:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.903 19:04:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.903 19:04:43 -- setup/common.sh@18 -- # local node= 00:04:05.903 19:04:43 -- setup/common.sh@19 -- # local var val 00:04:05.903 19:04:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.903 19:04:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.903 19:04:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.903 19:04:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.903 19:04:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.903 19:04:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.903 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.903 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7508488 kB' 'MemAvailable: 9465456 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854728 kB' 'Inactive: 1434844 kB' 'Active(anon): 128340 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119428 kB' 'Mapped: 48864 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142828 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77480 kB' 'KernelStack: 6268 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ 
Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 
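Every get_meminfo call in this trace follows the same pattern: read the relevant meminfo file, strip any "Node <n>" prefix, then scan the "key: value" lines until the requested field is found and echo its value. A rough, simplified reconstruction of that pattern follows (an approximation based on the setup/common.sh trace, not the verbatim script):
#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

# Rough reconstruction of the traced get_meminfo pattern (simplified).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-specific meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    # node*/meminfo lines are prefixed with "Node <n> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total   # prints 1024 in this run
get_meminfo MemUsed 0         # per-node query, as in the per_node_1G_alloc trace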
00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.904 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.904 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.905 19:04:43 -- setup/common.sh@33 -- # echo 0 00:04:05.905 19:04:43 -- setup/common.sh@33 -- # return 0 00:04:05.905 19:04:43 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.905 19:04:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.905 19:04:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.905 19:04:43 -- setup/common.sh@18 -- # local node= 00:04:05.905 19:04:43 -- setup/common.sh@19 -- # local var val 00:04:05.905 19:04:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.905 19:04:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.905 19:04:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.905 19:04:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.905 19:04:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.905 19:04:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7508740 kB' 'MemAvailable: 9465708 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854504 kB' 'Inactive: 1434844 kB' 'Active(anon): 128116 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119476 kB' 'Mapped: 48804 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142824 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77476 kB' 'KernelStack: 6220 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 
19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.905 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.905 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 
-- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.906 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.906 19:04:43 -- setup/common.sh@33 -- # echo 0 00:04:05.906 19:04:43 -- setup/common.sh@33 -- # return 0 00:04:05.906 19:04:43 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.906 19:04:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.906 19:04:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.906 19:04:43 -- setup/common.sh@18 -- # local node= 00:04:05.906 19:04:43 -- setup/common.sh@19 -- # local var val 00:04:05.906 19:04:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.906 19:04:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.906 19:04:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.906 19:04:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.906 19:04:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.906 19:04:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.906 19:04:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7508488 kB' 'MemAvailable: 9465456 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854676 kB' 'Inactive: 1434844 kB' 'Active(anon): 128288 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119416 kB' 'Mapped: 48676 kB' 'Shmem: 10472 kB' 'KReclaimable: 65348 kB' 'Slab: 142816 kB' 'SReclaimable: 65348 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6252 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:05.906 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 
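Once the HugePages_Rsvd query below completes, verify_nr_hugepages checks the configured count against the kernel's numbers with (( 1024 == nr_hugepages + surp + resv )). With the values echoed in this run (nr_hugepages=1024, surplus 0, reserved 0) that check reduces to the sketch below; the variable names here are illustrative, taken from the echoed output rather than the script source:
#!/usr/bin/env bash
# Illustrative sketch of the consistency check traced further below.
expected=1024        # NRHUGE requested for even_2G_alloc
nr_hugepages=1024    # matches HugePages_Total in this run's dumps
surp=0               # HugePages_Surp
resv=0               # HugePages_Rsvd
if (( expected == nr_hugepages + surp + resv )); then
    echo "hugepage count verified: total=$nr_hugepages surplus=$surp reserved=$resv"
fi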
00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.907 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.907 19:04:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.908 19:04:43 -- setup/common.sh@33 -- # echo 0 00:04:05.908 19:04:43 -- setup/common.sh@33 -- # return 0 00:04:05.908 nr_hugepages=1024 00:04:05.908 19:04:43 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.908 19:04:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.908 19:04:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.908 resv_hugepages=0 00:04:05.908 surplus_hugepages=0 00:04:05.908 anon_hugepages=0 00:04:05.908 19:04:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.908 19:04:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.908 19:04:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.908 19:04:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.908 19:04:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.908 19:04:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.908 19:04:43 -- setup/common.sh@18 -- # local node= 00:04:05.908 19:04:43 -- setup/common.sh@19 -- # local var val 00:04:05.908 19:04:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.908 19:04:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.908 19:04:43 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.908 19:04:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.908 19:04:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.908 19:04:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7508488 kB' 'MemAvailable: 9465448 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854584 kB' 'Inactive: 1434844 kB' 'Active(anon): 128196 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119300 kB' 'Mapped: 48676 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142796 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77460 kB' 'KernelStack: 6236 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # 
continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.908 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.908 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 19:04:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.910 19:04:43 -- setup/common.sh@33 -- # echo 1024 00:04:05.910 19:04:43 -- setup/common.sh@33 -- # return 0 00:04:05.910 19:04:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.910 19:04:43 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.910 19:04:43 -- setup/hugepages.sh@27 -- # local node 00:04:05.910 19:04:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.910 19:04:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.910 19:04:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.910 19:04:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.910 19:04:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.910 19:04:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.910 19:04:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.910 19:04:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.910 19:04:43 -- setup/common.sh@18 -- # local node=0 00:04:05.910 19:04:43 -- setup/common.sh@19 -- # local var val 00:04:05.910 19:04:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.910 19:04:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.910 19:04:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.910 19:04:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.910 19:04:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.910 19:04:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.910 19:04:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7507988 kB' 'MemUsed: 4733984 kB' 'SwapCached: 0 kB' 'Active: 854668 kB' 'Inactive: 1434844 kB' 'Active(anon): 128280 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2171704 kB' 'Mapped: 48676 kB' 'AnonPages: 119428 kB' 'Shmem: 10472 kB' 'KernelStack: 6252 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65336 kB' 'Slab: 142796 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 
19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.910 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.910 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- 
setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # continue 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 19:04:43 -- setup/common.sh@33 -- # echo 0 00:04:05.911 19:04:43 -- setup/common.sh@33 -- # return 0 00:04:05.911 node0=1024 expecting 1024 00:04:05.911 
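The field-by-field scan traced above is the repo's setup/common.sh get_meminfo helper walking /proc/meminfo (or the per-node /sys/devices/system/node/nodeN/meminfo file) with IFS=': ' until it reaches the requested key; setup/hugepages.sh then compares the returned HugePages_* counts against the expected nr_hugepages for each node. As a rough standalone sketch of that pattern — assuming a Linux meminfo layout and a hypothetical get_meminfo_field name, not the actual SPDK helper — the same lookup can be written as:

#!/usr/bin/env bash
# Sketch only: fetch one meminfo field (e.g. HugePages_Surp), optionally
# from a per-node meminfo file, mirroring the IFS=': ' read loop in the
# trace above. Not the SPDK setup/common.sh implementation.
get_meminfo_field() {
    local want=$1 node=${2:-} file=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; strip that, then
    # split each "Key:   value [kB]" line on ':' and whitespace.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$file")
    return 1
}

# The values the test above checks: 1024 total hugepages on node 0, 0 surplus.
get_meminfo_field HugePages_Total 0
get_meminfo_field HugePages_Surp 0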
************************************ 00:04:05.911 END TEST even_2G_alloc 00:04:05.911 ************************************ 00:04:05.911 19:04:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.911 19:04:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.911 19:04:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.911 19:04:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.911 19:04:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.911 19:04:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.911 00:04:05.911 real 0m0.717s 00:04:05.911 user 0m0.340s 00:04:05.911 sys 0m0.395s 00:04:05.911 19:04:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.911 19:04:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.911 19:04:43 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:05.911 19:04:43 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:05.911 19:04:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:05.911 19:04:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.911 ************************************ 00:04:05.911 START TEST odd_alloc 00:04:05.911 ************************************ 00:04:05.911 19:04:43 -- common/autotest_common.sh@1102 -- # odd_alloc 00:04:05.911 19:04:43 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:05.911 19:04:43 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:05.911 19:04:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.911 19:04:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.911 19:04:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:05.911 19:04:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.911 19:04:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.911 19:04:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.911 19:04:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:05.911 19:04:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:05.911 19:04:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.911 19:04:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.911 19:04:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.911 19:04:43 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.911 19:04:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.911 19:04:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:05.911 19:04:43 -- setup/hugepages.sh@83 -- # : 0 00:04:05.911 19:04:43 -- setup/hugepages.sh@84 -- # : 0 00:04:05.911 19:04:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.911 19:04:43 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:05.911 19:04:43 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:05.911 19:04:43 -- setup/hugepages.sh@160 -- # setup output 00:04:05.911 19:04:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.911 19:04:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.478 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.478 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.478 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.478 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.478 19:04:43 -- 
setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:06.478 19:04:43 -- setup/hugepages.sh@89 -- # local node 00:04:06.478 19:04:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.478 19:04:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.478 19:04:43 -- setup/hugepages.sh@92 -- # local surp 00:04:06.478 19:04:43 -- setup/hugepages.sh@93 -- # local resv 00:04:06.478 19:04:43 -- setup/hugepages.sh@94 -- # local anon 00:04:06.478 19:04:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.478 19:04:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.478 19:04:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.478 19:04:43 -- setup/common.sh@18 -- # local node= 00:04:06.478 19:04:43 -- setup/common.sh@19 -- # local var val 00:04:06.478 19:04:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.478 19:04:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.478 19:04:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.478 19:04:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.478 19:04:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.478 19:04:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7502156 kB' 'MemAvailable: 9459116 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 855168 kB' 'Inactive: 1434844 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119956 kB' 'Mapped: 48972 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142748 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77412 kB' 'KernelStack: 6268 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.479 19:04:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.479 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.479 19:04:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.480 19:04:43 -- setup/common.sh@33 -- # echo 0 00:04:06.480 19:04:43 -- setup/common.sh@33 -- # return 0 00:04:06.480 19:04:43 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.480 19:04:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.480 
19:04:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.480 19:04:43 -- setup/common.sh@18 -- # local node= 00:04:06.480 19:04:43 -- setup/common.sh@19 -- # local var val 00:04:06.480 19:04:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.480 19:04:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.480 19:04:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.480 19:04:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.480 19:04:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.480 19:04:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7509544 kB' 'MemAvailable: 9466504 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854472 kB' 'Inactive: 1434844 kB' 'Active(anon): 128084 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119468 kB' 'Mapped: 48600 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142724 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77388 kB' 'KernelStack: 6244 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.480 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.480 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 
00:04:06.481 19:04:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.481 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.481 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.742 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.742 19:04:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.743 19:04:43 -- setup/common.sh@33 -- # echo 0 00:04:06.743 19:04:43 -- setup/common.sh@33 -- # return 0 00:04:06.743 19:04:43 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.743 19:04:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.743 19:04:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.743 19:04:43 -- setup/common.sh@18 -- # local node= 00:04:06.743 19:04:43 -- setup/common.sh@19 -- # local var val 00:04:06.743 19:04:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.743 19:04:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.743 19:04:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.743 19:04:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.743 19:04:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.743 19:04:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7509292 kB' 'MemAvailable: 9466252 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854420 kB' 'Inactive: 
1434844 kB' 'Active(anon): 128032 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119404 kB' 'Mapped: 48712 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142792 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77456 kB' 'KernelStack: 6240 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 
19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.743 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.743 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 
-- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.744 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.744 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.744 19:04:43 -- setup/common.sh@33 -- # echo 0 00:04:06.744 19:04:43 -- setup/common.sh@33 -- # return 0 00:04:06.744 nr_hugepages=1025 00:04:06.744 19:04:43 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.744 19:04:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:06.744 resv_hugepages=0 00:04:06.744 19:04:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.744 surplus_hugepages=0 00:04:06.744 anon_hugepages=0 00:04:06.744 19:04:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.744 19:04:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.744 19:04:43 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:06.745 19:04:43 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:06.745 19:04:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.745 19:04:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.745 19:04:43 -- setup/common.sh@18 -- # local node= 00:04:06.745 19:04:43 -- setup/common.sh@19 -- # local var val 00:04:06.745 19:04:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.745 19:04:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.745 19:04:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.745 19:04:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.745 19:04:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.745 19:04:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7509044 kB' 'MemAvailable: 9466004 kB' 'Buffers: 2436 kB' 'Cached: 2169268 kB' 'SwapCached: 0 kB' 'Active: 854652 kB' 'Inactive: 1434844 kB' 'Active(anon): 128264 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119400 kB' 'Mapped: 48712 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142788 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77452 kB' 'KernelStack: 6240 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.745 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.745 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.746 19:04:43 -- setup/common.sh@33 -- # echo 1025 00:04:06.746 19:04:43 -- setup/common.sh@33 
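The wall of "continue" records above is setup/common.sh's get_meminfo helper walking /proc/meminfo (or a per-node meminfo file) key by key until it reaches the requested field, here HugePages_Total, and echoing its value (1025). A minimal sketch of that pattern, assuming a plain bash reimplementation rather than the exact helper from the repo:

get_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    # with a node index, read the per-node view; its keys carry a "Node <n> " prefix
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key is one "continue" record in the trace
        echo "$val"                        # e.g. 1025 for HugePages_Total here
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

Called as get_meminfo HugePages_Total it prints 1025; get_meminfo HugePages_Surp 0 prints node 0's surplus count, which is what the per-node pass below does.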
-- # return 0 00:04:06.746 19:04:43 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:06.746 19:04:43 -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.746 19:04:43 -- setup/hugepages.sh@27 -- # local node 00:04:06.746 19:04:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.746 19:04:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:06.746 19:04:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:06.746 19:04:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.746 19:04:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.746 19:04:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.746 19:04:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.746 19:04:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.746 19:04:43 -- setup/common.sh@18 -- # local node=0 00:04:06.746 19:04:43 -- setup/common.sh@19 -- # local var val 00:04:06.746 19:04:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.746 19:04:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.746 19:04:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.746 19:04:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.746 19:04:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.746 19:04:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7508544 kB' 'MemUsed: 4733428 kB' 'SwapCached: 0 kB' 'Active: 854468 kB' 'Inactive: 1434844 kB' 'Active(anon): 128080 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2171704 kB' 'Mapped: 48712 kB' 'AnonPages: 119208 kB' 'Shmem: 10472 kB' 'KernelStack: 6224 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65336 kB' 'Slab: 142788 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:06.746 19:04:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.746 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # 
continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # continue 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.747 19:04:44 -- setup/common.sh@33 -- # echo 0 00:04:06.747 19:04:44 -- setup/common.sh@33 -- # return 0 00:04:06.747 19:04:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.747 19:04:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.748 19:04:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.748 19:04:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.748 19:04:44 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:06.748 node0=1025 expecting 1025 00:04:06.748 19:04:44 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:06.748 00:04:06.748 real 0m0.728s 00:04:06.748 user 0m0.315s 00:04:06.748 sys 0m0.417s 00:04:06.748 19:04:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.748 ************************************ 00:04:06.748 END TEST odd_alloc 00:04:06.748 ************************************ 00:04:06.748 19:04:44 -- common/autotest_common.sh@10 -- # set +x 00:04:06.748 19:04:44 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:06.748 19:04:44 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:06.748 19:04:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:06.748 19:04:44 -- common/autotest_common.sh@10 -- # set +x 00:04:06.748 ************************************ 00:04:06.748 START TEST custom_alloc 00:04:06.748 ************************************ 00:04:06.748 19:04:44 -- common/autotest_common.sh@1102 
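That closes the odd_alloc case: the pool was set to an odd page count (1025) and the trace confirms the kernel's view agrees, globally and on the single node. The assertion being exercised, written out with illustrative shell variables (a condensed sketch, not the literal hugepages.sh flow):

nr_hugepages=1025
total=$(get_meminfo HugePages_Total)            # 1025 in the dump above
surp=$(get_meminfo HugePages_Surp)              # 0
resv=$(get_meminfo HugePages_Rsvd)              # 0
(( total == nr_hugepages + surp + resv )) || exit 1
# per-node cross-check on this single-node VM, matching "node0=1025 expecting 1025"
node0=$(( nr_hugepages + resv + $(get_meminfo HugePages_Surp 0) ))
echo "node0=$node0 expecting $nr_hugepages"

The custom_alloc trace that follows repeats the same verification for a 512-page pool.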
-- # custom_alloc 00:04:06.748 19:04:44 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:06.748 19:04:44 -- setup/hugepages.sh@169 -- # local node 00:04:06.748 19:04:44 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:06.748 19:04:44 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:06.748 19:04:44 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:06.748 19:04:44 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:06.748 19:04:44 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:06.748 19:04:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:06.748 19:04:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.748 19:04:44 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:06.748 19:04:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:06.748 19:04:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.748 19:04:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.748 19:04:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:06.748 19:04:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:06.748 19:04:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.748 19:04:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.748 19:04:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.748 19:04:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:06.748 19:04:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.748 19:04:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:06.748 19:04:44 -- setup/hugepages.sh@83 -- # : 0 00:04:06.748 19:04:44 -- setup/hugepages.sh@84 -- # : 0 00:04:06.748 19:04:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.748 19:04:44 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:06.748 19:04:44 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:06.748 19:04:44 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:06.748 19:04:44 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:06.748 19:04:44 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:06.748 19:04:44 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:06.748 19:04:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.748 19:04:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.748 19:04:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:06.748 19:04:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:06.748 19:04:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.748 19:04:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.748 19:04:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.748 19:04:44 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:06.748 19:04:44 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:06.748 19:04:44 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:06.748 19:04:44 -- setup/hugepages.sh@78 -- # return 0 00:04:06.748 19:04:44 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:06.748 19:04:44 -- setup/hugepages.sh@187 -- # setup output 00:04:06.748 19:04:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.748 19:04:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.319 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.319 0000:00:09.0 (1b36 0010): Already using the 
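custom_alloc's first step, visible in the get_test_nr_hugepages trace above, is converting the requested 1048576 kB pool into a page count at the default 2048 kB hugepage size, which gives 512 pages, all assigned to the only node via HUGENODE='nodes_hp[0]=512'. The arithmetic, spelled out with illustrative variable names (the division itself is inferred from the traced inputs and result):

size_kb=1048576                                   # argument to get_test_nr_hugepages
hugepage_kb=2048                                  # "Hugepagesize: 2048 kB" in the meminfo dumps
nr_hugepages=$(( size_kb / hugepage_kb ))         # 512
no_nodes=1
HUGENODE="nodes_hp[0]=$(( nr_hugepages / no_nodes ))"
echo "nr_hugepages=$nr_hugepages $HUGENODE"       # nr_hugepages=512 nodes_hp[0]=512

setup.sh is then invoked to rebind devices and apply the hugepage configuration, which is the PCI output that follows.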
uio_pci_generic driver 00:04:07.319 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.319 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.319 19:04:44 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:07.319 19:04:44 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:07.319 19:04:44 -- setup/hugepages.sh@89 -- # local node 00:04:07.319 19:04:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.319 19:04:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.319 19:04:44 -- setup/hugepages.sh@92 -- # local surp 00:04:07.319 19:04:44 -- setup/hugepages.sh@93 -- # local resv 00:04:07.319 19:04:44 -- setup/hugepages.sh@94 -- # local anon 00:04:07.319 19:04:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.319 19:04:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.319 19:04:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.319 19:04:44 -- setup/common.sh@18 -- # local node= 00:04:07.319 19:04:44 -- setup/common.sh@19 -- # local var val 00:04:07.319 19:04:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.319 19:04:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.319 19:04:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.319 19:04:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.319 19:04:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.319 19:04:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.319 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.319 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8555824 kB' 'MemAvailable: 10512788 kB' 'Buffers: 2436 kB' 'Cached: 2169272 kB' 'SwapCached: 0 kB' 'Active: 854884 kB' 'Inactive: 1434848 kB' 'Active(anon): 128496 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119588 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142800 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77464 kB' 'KernelStack: 6248 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- 
setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 
-- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.320 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.320 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.321 19:04:44 -- setup/common.sh@33 -- # echo 0 00:04:07.321 19:04:44 -- setup/common.sh@33 -- # return 0 00:04:07.321 19:04:44 -- setup/hugepages.sh@97 -- # anon=0 00:04:07.321 19:04:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.321 19:04:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.321 19:04:44 -- setup/common.sh@18 -- # local node= 00:04:07.321 19:04:44 -- setup/common.sh@19 -- # local var val 00:04:07.321 19:04:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.321 19:04:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.321 19:04:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.321 19:04:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.321 19:04:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.321 19:04:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8555824 kB' 'MemAvailable: 10512788 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 854728 kB' 'Inactive: 1434848 kB' 'Active(anon): 128340 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119260 kB' 'Mapped: 48712 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142804 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6240 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.321 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.321 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 
00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.322 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.322 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.322 19:04:44 -- setup/common.sh@33 -- # echo 0 00:04:07.322 19:04:44 -- setup/common.sh@33 -- # return 0 00:04:07.322 19:04:44 -- setup/hugepages.sh@99 -- # surp=0 00:04:07.322 19:04:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.322 19:04:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.322 19:04:44 -- setup/common.sh@18 -- # local node= 00:04:07.322 19:04:44 -- setup/common.sh@19 -- # local var val 00:04:07.322 19:04:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.322 19:04:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.322 19:04:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.322 19:04:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.322 19:04:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.322 19:04:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.322 19:04:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8555824 kB' 'MemAvailable: 10512788 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 854572 kB' 'Inactive: 1434848 kB' 'Active(anon): 128184 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119344 kB' 'Mapped: 48712 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142796 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77460 kB' 'KernelStack: 6224 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.323 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.323 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 
19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.324 19:04:44 -- setup/common.sh@33 -- # echo 0 00:04:07.324 19:04:44 -- setup/common.sh@33 -- # return 0 00:04:07.324 nr_hugepages=512 00:04:07.324 resv_hugepages=0 00:04:07.324 19:04:44 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.324 19:04:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:07.324 19:04:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.324 surplus_hugepages=0 00:04:07.324 19:04:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.324 anon_hugepages=0 00:04:07.324 19:04:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.324 19:04:44 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:07.324 19:04:44 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:07.324 19:04:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.324 19:04:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.324 19:04:44 -- setup/common.sh@18 -- # local node= 00:04:07.324 19:04:44 -- setup/common.sh@19 -- # local var val 00:04:07.324 19:04:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.324 19:04:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.324 19:04:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.324 19:04:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.324 19:04:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.324 19:04:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.324 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.324 19:04:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8555068 kB' 'MemAvailable: 10512032 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 854596 kB' 'Inactive: 1434848 kB' 'Active(anon): 128208 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119372 kB' 'Mapped: 48712 kB' 
'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142796 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77460 kB' 'KernelStack: 6224 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.324 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.584 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.584 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 
-- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- 
setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.585 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.585 19:04:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- 
setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.586 19:04:44 -- setup/common.sh@33 -- # echo 512 00:04:07.586 19:04:44 -- setup/common.sh@33 -- # return 0 00:04:07.586 19:04:44 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:07.586 19:04:44 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.586 19:04:44 -- setup/hugepages.sh@27 -- # local node 00:04:07.586 19:04:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.586 19:04:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.586 19:04:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:07.586 19:04:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.586 19:04:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.586 19:04:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.586 19:04:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.586 19:04:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.586 19:04:44 -- setup/common.sh@18 -- # local node=0 00:04:07.586 19:04:44 -- setup/common.sh@19 -- # local var val 00:04:07.586 19:04:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.586 19:04:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.586 19:04:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.586 19:04:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.586 19:04:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.586 19:04:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8555068 kB' 'MemUsed: 3686904 kB' 'SwapCached: 0 kB' 'Active: 854580 kB' 'Inactive: 1434848 kB' 'Active(anon): 128192 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2171712 kB' 'Mapped: 48712 kB' 'AnonPages: 119352 kB' 'Shmem: 10472 kB' 'KernelStack: 6224 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65336 kB' 'Slab: 142796 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 
19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.586 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.586 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # continue 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 19:04:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 19:04:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.587 19:04:44 -- setup/common.sh@33 -- # echo 0 00:04:07.587 19:04:44 -- setup/common.sh@33 -- # return 0 00:04:07.587 19:04:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.587 19:04:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.587 node0=512 expecting 512 00:04:07.587 19:04:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.587 19:04:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.587 19:04:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.587 19:04:44 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:07.587 00:04:07.587 real 0m0.717s 00:04:07.587 user 0m0.332s 00:04:07.587 sys 0m0.399s 00:04:07.587 19:04:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:07.587 19:04:44 -- common/autotest_common.sh@10 -- # set +x 00:04:07.587 ************************************ 00:04:07.587 END TEST custom_alloc 00:04:07.587 ************************************ 00:04:07.587 19:04:44 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:07.587 19:04:44 -- 
common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:07.587 19:04:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:07.587 19:04:44 -- common/autotest_common.sh@10 -- # set +x 00:04:07.587 ************************************ 00:04:07.587 START TEST no_shrink_alloc 00:04:07.587 ************************************ 00:04:07.587 19:04:44 -- common/autotest_common.sh@1102 -- # no_shrink_alloc 00:04:07.587 19:04:44 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:07.587 19:04:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:07.588 19:04:44 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:07.588 19:04:44 -- setup/hugepages.sh@51 -- # shift 00:04:07.588 19:04:44 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:07.588 19:04:44 -- setup/hugepages.sh@52 -- # local node_ids 00:04:07.588 19:04:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.588 19:04:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:07.588 19:04:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:07.588 19:04:44 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:07.588 19:04:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.588 19:04:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:07.588 19:04:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:07.588 19:04:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.588 19:04:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.588 19:04:44 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:07.588 19:04:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.588 19:04:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:07.588 19:04:44 -- setup/hugepages.sh@73 -- # return 0 00:04:07.588 19:04:44 -- setup/hugepages.sh@198 -- # setup output 00:04:07.588 19:04:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.588 19:04:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:08.159 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.159 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.159 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.159 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.159 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.159 19:04:45 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:08.159 19:04:45 -- setup/hugepages.sh@89 -- # local node 00:04:08.159 19:04:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.159 19:04:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.159 19:04:45 -- setup/hugepages.sh@92 -- # local surp 00:04:08.159 19:04:45 -- setup/hugepages.sh@93 -- # local resv 00:04:08.159 19:04:45 -- setup/hugepages.sh@94 -- # local anon 00:04:08.159 19:04:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.159 19:04:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.159 19:04:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.159 19:04:45 -- setup/common.sh@18 -- # local node= 00:04:08.159 19:04:45 -- setup/common.sh@19 -- # local var val 00:04:08.159 19:04:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.159 19:04:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.159 19:04:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.159 19:04:45 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.159 19:04:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.159 19:04:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7505044 kB' 'MemAvailable: 9462012 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 854720 kB' 'Inactive: 1434852 kB' 'Active(anon): 128332 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119432 kB' 'Mapped: 48828 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142844 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77508 kB' 'KernelStack: 6320 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.159 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.159 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- 
setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.160 19:04:45 -- setup/common.sh@33 -- # echo 0 00:04:08.160 19:04:45 -- setup/common.sh@33 -- # return 0 00:04:08.160 19:04:45 -- setup/hugepages.sh@97 -- # anon=0 00:04:08.160 19:04:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.160 19:04:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.160 19:04:45 -- setup/common.sh@18 -- # local node= 00:04:08.160 19:04:45 -- setup/common.sh@19 -- # local var val 00:04:08.160 19:04:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.160 19:04:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.160 19:04:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.160 19:04:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.160 19:04:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.160 19:04:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7505396 kB' 'MemAvailable: 9462364 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 854672 kB' 'Inactive: 1434852 kB' 'Active(anon): 128284 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119452 kB' 'Mapped: 48772 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142824 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77488 kB' 'KernelStack: 6272 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.160 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.160 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 
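The xtrace above is the per-key scan inside get_meminfo: with IFS=': ' it reads one "Key: value" pair per meminfo line, hits "continue" for every key that is not the one requested (here HugePages_Surp), and echoes the matching value. A minimal stand-alone sketch of that lookup, assuming a simplified helper with a hypothetical name meminfo_value (the real get_meminfo in setup/common.sh additionally handles per-node meminfo files):

    meminfo_value() {
        # Print the numeric value of one /proc/meminfo key, e.g. "HugePages_Surp".
        local get=$1 var val _
        while IFS=': ' read -r var val _; do       # "HugePages_Surp:  0" -> var=HugePages_Surp val=0
            if [[ $var == "$get" ]]; then
                echo "$val"                        # unit suffix (kB), if present, lands in $_
                return 0
            fi
        done < /proc/meminfo
        return 1                                   # key not present
    }
    # e.g.: surp=$(meminfo_value HugePages_Surp)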
00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 
19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.161 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.161 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.162 19:04:45 -- setup/common.sh@33 -- # echo 0 00:04:08.162 19:04:45 -- setup/common.sh@33 -- # return 0 00:04:08.162 19:04:45 -- setup/hugepages.sh@99 -- # surp=0 00:04:08.162 19:04:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.162 19:04:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.162 19:04:45 -- setup/common.sh@18 -- # local node= 00:04:08.162 19:04:45 -- setup/common.sh@19 -- # local var val 00:04:08.162 19:04:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.162 19:04:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.162 19:04:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.162 19:04:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.162 19:04:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.162 19:04:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7505396 kB' 'MemAvailable: 9462364 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 854756 kB' 'Inactive: 1434852 kB' 'Active(anon): 128368 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119492 kB' 'Mapped: 48712 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142808 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77472 kB' 'KernelStack: 6224 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:08.162 19:04:45 -- setup/common.sh@31 
-- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 
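Before a scan like the one above runs, get_meminfo picks its input file: with an empty node argument (local node=) it reads /proc/meminfo, whereas the earlier custom_alloc trace (node=0) read /sys/devices/system/node/node0/meminfo and stripped the per-node prefix with mem=("${mem[@]#Node +([0-9]) }") so the keys line up with the global format. A minimal sketch of that selection, assuming a simplified stand-alone helper with a hypothetical name node_meminfo:

    shopt -s extglob                               # needed for the +([0-9]) pattern below
    node_meminfo() {
        # Dump meminfo for one NUMA node (or the whole system when $1 is empty),
        # normalized so every line starts with the plain "Key: value" form.
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")           # per-node lines read "Node 0 MemFree: ..."; drop that prefix
        printf '%s\n' "${mem[@]}"
    }
    # e.g.: node_meminfo 0 | grep HugePages_Total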
00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.162 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.162 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 
-- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.163 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.163 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
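Once the HugePages_Rsvd lookup returns, the trace that follows feeds the three lookups into verify_nr_hugepages' consistency checks, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )). Spelled out with the values visible in this run (hard-wired here purely for illustration):

    nr_hugepages=1024   # requested: 2097152 kB / 2048 kB hugepage size = 1024 pages
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1024          # HugePages_Total
    (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> true
    (( total == nr_hugepages ))                 # 1024 == 1024         -> true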
00:04:08.163 19:04:45 -- setup/common.sh@33 -- # echo 0 00:04:08.163 19:04:45 -- setup/common.sh@33 -- # return 0 00:04:08.163 nr_hugepages=1024 00:04:08.163 resv_hugepages=0 00:04:08.163 19:04:45 -- setup/hugepages.sh@100 -- # resv=0 00:04:08.163 19:04:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.163 19:04:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.163 surplus_hugepages=0 00:04:08.163 anon_hugepages=0 00:04:08.163 19:04:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.163 19:04:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.163 19:04:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.163 19:04:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.163 19:04:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.163 19:04:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.163 19:04:45 -- setup/common.sh@18 -- # local node= 00:04:08.163 19:04:45 -- setup/common.sh@19 -- # local var val 00:04:08.163 19:04:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.163 19:04:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.163 19:04:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.163 19:04:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.163 19:04:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.163 19:04:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7505396 kB' 'MemAvailable: 9462364 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 854416 kB' 'Inactive: 1434852 kB' 'Active(anon): 128028 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119200 kB' 'Mapped: 48712 kB' 'Shmem: 10472 kB' 'KReclaimable: 65336 kB' 'Slab: 142804 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6224 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.164 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.164 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.165 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.165 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.165 19:04:45 -- setup/common.sh@33 -- # echo 1024 00:04:08.165 19:04:45 -- setup/common.sh@33 -- # return 0 00:04:08.165 19:04:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.165 19:04:45 -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.165 19:04:45 -- setup/hugepages.sh@27 -- # local node 00:04:08.165 19:04:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.165 19:04:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:08.165 19:04:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:08.165 19:04:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.165 19:04:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.165 19:04:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.165 19:04:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.165 19:04:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.165 19:04:45 -- setup/common.sh@18 -- # local node=0 00:04:08.165 19:04:45 -- setup/common.sh@19 -- # local var val 00:04:08.165 19:04:45 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:08.165 19:04:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.166 19:04:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.166 19:04:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.166 19:04:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.166 19:04:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7504640 kB' 'MemUsed: 4737332 kB' 'SwapCached: 0 kB' 'Active: 854460 kB' 'Inactive: 1434852 kB' 'Active(anon): 128072 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2171712 kB' 'Mapped: 48712 kB' 'AnonPages: 119188 kB' 'Shmem: 10472 kB' 'KernelStack: 6224 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65336 kB' 'Slab: 142808 kB' 'SReclaimable: 65336 kB' 'SUnreclaim: 77472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 
00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.166 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.166 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.167 
19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # continue 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.167 19:04:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.167 19:04:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.167 19:04:45 -- setup/common.sh@33 -- # echo 0 00:04:08.167 19:04:45 -- setup/common.sh@33 -- # return 0 00:04:08.167 19:04:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.167 19:04:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.167 19:04:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.167 19:04:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.167 19:04:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:08.167 node0=1024 expecting 1024 00:04:08.167 19:04:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:08.167 19:04:45 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:08.167 19:04:45 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:08.167 19:04:45 -- setup/hugepages.sh@202 -- # setup output 00:04:08.167 19:04:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.167 19:04:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:08.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.736 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.736 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.736 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.736 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.736 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:08.736 19:04:46 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:08.736 19:04:46 -- setup/hugepages.sh@89 -- # local node 00:04:08.736 19:04:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.736 19:04:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.736 19:04:46 -- setup/hugepages.sh@92 -- # local surp 00:04:08.736 19:04:46 -- setup/hugepages.sh@93 -- # local resv 00:04:08.736 19:04:46 -- setup/hugepages.sh@94 -- # local anon 00:04:08.736 19:04:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.736 19:04:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.736 19:04:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.736 19:04:46 -- setup/common.sh@18 -- # local node= 00:04:08.736 19:04:46 -- setup/common.sh@19 -- # local var val 00:04:08.736 19:04:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.736 19:04:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.736 19:04:46 -- setup/common.sh@23 -- 
# [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.736 19:04:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.736 19:04:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.736 19:04:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7511408 kB' 'MemAvailable: 9468372 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 851908 kB' 'Inactive: 1434852 kB' 'Active(anon): 125520 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116684 kB' 'Mapped: 48108 kB' 'Shmem: 10472 kB' 'KReclaimable: 65328 kB' 'Slab: 142680 kB' 'SReclaimable: 65328 kB' 'SUnreclaim: 77352 kB' 'KernelStack: 6216 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 
-- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- 
setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.737 19:04:46 -- setup/common.sh@33 -- # echo 0 00:04:08.737 19:04:46 -- setup/common.sh@33 -- # return 0 00:04:08.737 19:04:46 -- setup/hugepages.sh@97 -- # anon=0 00:04:08.737 19:04:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.737 19:04:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.737 19:04:46 -- setup/common.sh@18 -- # local node= 00:04:08.737 19:04:46 -- setup/common.sh@19 -- # local var val 00:04:08.737 19:04:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.737 19:04:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.737 19:04:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.737 19:04:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.737 19:04:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.737 19:04:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7511528 kB' 'MemAvailable: 9468492 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 851568 kB' 'Inactive: 1434852 kB' 
'Active(anon): 125180 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116268 kB' 'Mapped: 47980 kB' 'Shmem: 10472 kB' 'KReclaimable: 65328 kB' 'Slab: 142688 kB' 'SReclaimable: 65328 kB' 'SUnreclaim: 77360 kB' 'KernelStack: 6204 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 19:04:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # 
continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.739 19:04:46 -- setup/common.sh@33 -- # echo 0 00:04:08.739 19:04:46 -- setup/common.sh@33 -- # return 0 00:04:08.739 19:04:46 -- setup/hugepages.sh@99 -- # surp=0 00:04:08.739 19:04:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.739 19:04:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.739 19:04:46 -- setup/common.sh@18 -- # local node= 00:04:08.739 19:04:46 -- setup/common.sh@19 -- # local var val 00:04:08.739 19:04:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.739 19:04:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.739 19:04:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.739 19:04:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.739 19:04:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.739 19:04:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7511524 kB' 'MemAvailable: 9468488 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 851436 kB' 'Inactive: 1434852 kB' 'Active(anon): 125048 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116400 kB' 'Mapped: 47912 kB' 'Shmem: 10472 kB' 'KReclaimable: 65328 kB' 'Slab: 142684 kB' 'SReclaimable: 65328 kB' 'SUnreclaim: 77356 kB' 'KernelStack: 6144 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 
'DirectMap1G: 8388608 kB' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # continue 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 19:04:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.000 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.000 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 
19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.001 19:04:46 -- setup/common.sh@33 -- # echo 0 00:04:09.001 19:04:46 -- setup/common.sh@33 -- # return 0 00:04:09.001 nr_hugepages=1024 00:04:09.001 resv_hugepages=0 00:04:09.001 surplus_hugepages=0 00:04:09.001 19:04:46 -- setup/hugepages.sh@100 -- # resv=0 00:04:09.001 19:04:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.001 19:04:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.001 19:04:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.001 anon_hugepages=0 00:04:09.001 19:04:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.001 19:04:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.001 19:04:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.001 19:04:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.001 19:04:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.001 19:04:46 -- setup/common.sh@18 -- # local node= 00:04:09.001 19:04:46 -- setup/common.sh@19 -- # local var val 00:04:09.001 19:04:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.001 19:04:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.001 19:04:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.001 19:04:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.001 19:04:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.001 19:04:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7511524 kB' 'MemAvailable: 9468488 kB' 'Buffers: 2436 kB' 'Cached: 2169276 kB' 'SwapCached: 0 kB' 'Active: 851320 kB' 'Inactive: 1434852 kB' 'Active(anon): 124932 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116336 kB' 'Mapped: 47972 kB' 'Shmem: 10472 kB' 'KReclaimable: 65328 kB' 'Slab: 142684 kB' 'SReclaimable: 65328 kB' 'SUnreclaim: 77356 kB' 'KernelStack: 6160 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.001 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.001 19:04:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 
00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.002 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.002 19:04:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.002 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.003 19:04:46 -- setup/common.sh@33 -- # echo 1024 00:04:09.003 19:04:46 -- setup/common.sh@33 -- # return 0 00:04:09.003 19:04:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.003 19:04:46 -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.003 19:04:46 -- setup/hugepages.sh@27 -- # local node 00:04:09.003 19:04:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.003 19:04:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.003 19:04:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:09.003 19:04:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.003 19:04:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.003 19:04:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.003 19:04:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.003 19:04:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.003 19:04:46 -- setup/common.sh@18 -- # local node=0 00:04:09.003 19:04:46 -- 
setup/common.sh@19 -- # local var val 00:04:09.003 19:04:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.003 19:04:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.003 19:04:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.003 19:04:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.003 19:04:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.003 19:04:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7511524 kB' 'MemUsed: 4730448 kB' 'SwapCached: 0 kB' 'Active: 851624 kB' 'Inactive: 1434852 kB' 'Active(anon): 125236 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1434852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2171712 kB' 'Mapped: 47972 kB' 'AnonPages: 116384 kB' 'Shmem: 10472 kB' 'KernelStack: 6160 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65328 kB' 'Slab: 142684 kB' 'SReclaimable: 65328 kB' 'SUnreclaim: 77356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.003 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.003 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 
00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # continue 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.004 19:04:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.004 19:04:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.004 19:04:46 -- setup/common.sh@33 -- # echo 0 00:04:09.004 19:04:46 -- setup/common.sh@33 -- # return 0 00:04:09.004 19:04:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.004 19:04:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.004 19:04:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.004 19:04:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.004 19:04:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.004 node0=1024 expecting 1024 00:04:09.004 ************************************ 00:04:09.004 END TEST no_shrink_alloc 00:04:09.004 ************************************ 00:04:09.004 19:04:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.004 00:04:09.004 real 0m1.403s 00:04:09.004 user 0m0.666s 00:04:09.004 sys 0m0.778s 00:04:09.004 19:04:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.004 19:04:46 -- common/autotest_common.sh@10 -- # set +x 00:04:09.004 19:04:46 -- setup/hugepages.sh@217 -- # clear_hp 00:04:09.004 19:04:46 -- setup/hugepages.sh@37 -- # local node hp 00:04:09.004 19:04:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.004 19:04:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.004 19:04:46 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.004 19:04:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.004 19:04:46 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.004 19:04:46 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:09.004 19:04:46 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:09.004 00:04:09.004 real 0m6.166s 00:04:09.004 user 0m2.834s 00:04:09.004 sys 0m3.418s 00:04:09.004 19:04:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.004 ************************************ 00:04:09.004 END TEST hugepages 00:04:09.004 ************************************ 00:04:09.004 19:04:46 -- common/autotest_common.sh@10 -- # set +x 00:04:09.004 19:04:46 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:09.004 19:04:46 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:09.004 19:04:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:09.004 19:04:46 -- common/autotest_common.sh@10 -- # set +x 00:04:09.004 
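The long xtrace run above is the setup helpers scanning the memory counters one key at a time: each "Key: value" record is split on IFS=': ', every key that is not the one requested hits "continue", and the matching key echoes its value back to the hugepages checks (surp, resv, per-node totals). A minimal bash sketch of that pattern follows; the function name get_meminfo_sketch, the sed-based "Node <N> " prefix strip, and the default-to-zero fallback are illustrative assumptions, not the actual SPDK setup/common.sh code.

#!/usr/bin/env bash
# Look up one key in /proc/meminfo or a per-node meminfo file, the way the
# traced loop does: read "Key: value [kB]" records and skip everything else.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # Echo the value column as soon as the requested key is seen.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix each line with "Node <N> "
    echo 0   # key not present at all
}

# Example lookups matching the ones traced above (illustrative only):
#   surp=$(get_meminfo_sketch HugePages_Surp)
#   node0_free=$(get_meminfo_sketch HugePages_Free /sys/devices/system/node/node0/meminfo)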
************************************ 00:04:09.004 START TEST driver 00:04:09.004 ************************************ 00:04:09.004 19:04:46 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:09.263 * Looking for test storage... 00:04:09.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:09.263 19:04:46 -- setup/driver.sh@68 -- # setup reset 00:04:09.263 19:04:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.263 19:04:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.825 19:04:52 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:15.825 19:04:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:15.825 19:04:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:15.825 19:04:52 -- common/autotest_common.sh@10 -- # set +x 00:04:15.825 ************************************ 00:04:15.825 START TEST guess_driver 00:04:15.825 ************************************ 00:04:15.825 19:04:52 -- common/autotest_common.sh@1102 -- # guess_driver 00:04:15.825 19:04:52 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:15.825 19:04:52 -- setup/driver.sh@47 -- # local fail=0 00:04:15.825 19:04:52 -- setup/driver.sh@49 -- # pick_driver 00:04:15.825 19:04:52 -- setup/driver.sh@36 -- # vfio 00:04:15.825 19:04:52 -- setup/driver.sh@21 -- # local iommu_grups 00:04:15.825 19:04:52 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:15.825 19:04:52 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:15.826 19:04:52 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:15.826 19:04:52 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:15.826 19:04:52 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:15.826 19:04:52 -- setup/driver.sh@32 -- # return 1 00:04:15.826 19:04:52 -- setup/driver.sh@38 -- # uio 00:04:15.826 19:04:52 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:15.826 19:04:52 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:15.826 19:04:52 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:15.826 19:04:52 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:15.826 19:04:52 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:15.826 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:15.826 19:04:52 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:15.826 19:04:52 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:15.826 Looking for driver=uio_pci_generic 00:04:15.826 19:04:52 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:15.826 19:04:52 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:15.826 19:04:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.826 19:04:52 -- setup/driver.sh@45 -- # setup output config 00:04:15.826 19:04:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.826 19:04:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:15.826 lsblk: /dev/nvme0c0n1: not a block device 00:04:16.086 19:04:53 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:16.086 19:04:53 -- setup/driver.sh@58 -- # continue 00:04:16.086 19:04:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.345 19:04:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.345 19:04:53 -- 
setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.345 19:04:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.345 19:04:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.345 19:04:53 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.345 19:04:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.345 19:04:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.345 19:04:53 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.345 19:04:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.345 19:04:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.345 19:04:53 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.345 19:04:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.345 19:04:53 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:16.345 19:04:53 -- setup/driver.sh@65 -- # setup reset 00:04:16.345 19:04:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.345 19:04:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.909 ************************************ 00:04:22.909 END TEST guess_driver 00:04:22.909 ************************************ 00:04:22.909 00:04:22.909 real 0m7.278s 00:04:22.909 user 0m0.861s 00:04:22.909 sys 0m1.580s 00:04:22.909 19:04:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.909 19:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:22.909 ************************************ 00:04:22.909 END TEST driver 00:04:22.909 ************************************ 00:04:22.909 00:04:22.910 real 0m13.310s 00:04:22.910 user 0m1.250s 00:04:22.910 sys 0m2.370s 00:04:22.910 19:04:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.910 19:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:22.910 19:04:59 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:22.910 19:04:59 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:22.910 19:04:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:22.910 19:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:22.910 ************************************ 00:04:22.910 START TEST devices 00:04:22.910 ************************************ 00:04:22.910 19:04:59 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:22.910 * Looking for test storage... 
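For the driver test that just finished, the trace shows how the binding driver gets picked: vfio is only chosen when IOMMU groups are populated (or the unsafe no-IOMMU override reads Y); otherwise the script resolves uio_pci_generic with modprobe --show-depends and accepts it if the dependency chain points at real .ko modules. A condensed sketch of that decision is below; pick_driver_sketch is a hypothetical name that collapses what setup/driver.sh spreads across several helpers, and the grep for ".ko" stands in for the script's glob match.

#!/usr/bin/env bash
# Decide which userspace PCI driver the test environment can use.
pick_driver_sketch() {
    local unsafe='' ngroups
    # Count populated IOMMU groups; compgen -G prints matching paths, if any.
    ngroups=$(compgen -G '/sys/kernel/iommu_groups/*' 2>/dev/null | wc -l)
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ngroups > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        # Fall back to uio_pci_generic when modprobe can resolve its modules
        # (the "insmod .../uio_pci_generic.ko.xz" output seen in the log above).
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

# e.g.: driver=$(pick_driver_sketch)   # on this VM the trace resolved to uio_pci_generic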
00:04:22.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:22.910 19:04:59 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:22.910 19:04:59 -- setup/devices.sh@192 -- # setup reset 00:04:22.910 19:04:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.910 19:04:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.846 19:05:01 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:23.846 19:05:01 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:04:23.846 19:05:01 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:04:23.846 19:05:01 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:04:23.846 19:05:01 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:04:23.846 19:05:01 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0c0n1 00:04:23.846 19:05:01 -- common/autotest_common.sh@1645 -- # local device=nvme0c0n1 00:04:23.846 19:05:01 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:04:23.846 19:05:01 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:04:23.846 19:05:01 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:04:23.846 19:05:01 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:04:23.846 19:05:01 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:04:23.846 19:05:01 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:04:23.846 19:05:01 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:04:23.846 19:05:01 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:04:23.846 19:05:01 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:04:23.846 19:05:01 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:04:23.846 19:05:01 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n3 00:04:23.846 19:05:01 -- common/autotest_common.sh@1645 -- # local device=nvme1n3 00:04:23.846 19:05:01 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:04:23.846 19:05:01 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme2n1 00:04:23.846 19:05:01 -- common/autotest_common.sh@1645 -- # local device=nvme2n1 00:04:23.846 19:05:01 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:04:23.846 19:05:01 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme3n1 00:04:23.846 19:05:01 -- common/autotest_common.sh@1645 -- # local 
device=nvme3n1 00:04:23.846 19:05:01 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:23.846 19:05:01 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:04:23.846 19:05:01 -- setup/devices.sh@196 -- # blocks=() 00:04:23.846 19:05:01 -- setup/devices.sh@196 -- # declare -a blocks 00:04:23.846 19:05:01 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:23.846 19:05:01 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:23.846 19:05:01 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:23.846 19:05:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.846 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:23.846 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:23.846 19:05:01 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:23.846 19:05:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:23.846 19:05:01 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:23.846 19:05:01 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:23.846 19:05:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:23.846 No valid GPT data, bailing 00:04:23.846 19:05:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.846 19:05:01 -- scripts/common.sh@393 -- # pt= 00:04:23.846 19:05:01 -- scripts/common.sh@394 -- # return 1 00:04:23.846 19:05:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:23.846 19:05:01 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:23.846 19:05:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:23.846 19:05:01 -- setup/common.sh@80 -- # echo 1073741824 00:04:23.846 19:05:01 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:23.846 19:05:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.846 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:23.846 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.846 19:05:01 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:23.846 19:05:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:23.846 19:05:01 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:23.846 19:05:01 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:23.846 19:05:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:23.846 No valid GPT data, bailing 00:04:23.846 19:05:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:23.846 19:05:01 -- scripts/common.sh@393 -- # pt= 00:04:23.846 19:05:01 -- scripts/common.sh@394 -- # return 1 00:04:23.846 19:05:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:23.846 19:05:01 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:23.846 19:05:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:23.846 19:05:01 -- setup/common.sh@80 -- # echo 4294967296 00:04:23.846 19:05:01 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.846 19:05:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.846 19:05:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:23.846 19:05:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.846 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:23.846 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.846 19:05:01 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:23.846 19:05:01 -- setup/devices.sh@203 -- # [[ '' == 
*\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:23.846 19:05:01 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:23.846 19:05:01 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:23.846 19:05:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:23.846 No valid GPT data, bailing 00:04:23.846 19:05:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:23.846 19:05:01 -- scripts/common.sh@393 -- # pt= 00:04:23.846 19:05:01 -- scripts/common.sh@394 -- # return 1 00:04:23.846 19:05:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:23.846 19:05:01 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:23.846 19:05:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:23.846 19:05:01 -- setup/common.sh@80 -- # echo 4294967296 00:04:23.846 19:05:01 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.846 19:05:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.846 19:05:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:23.846 19:05:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.846 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:23.846 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.846 19:05:01 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:23.846 19:05:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:23.846 19:05:01 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:23.846 19:05:01 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:23.846 19:05:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:24.106 No valid GPT data, bailing 00:04:24.106 19:05:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:24.106 19:05:01 -- scripts/common.sh@393 -- # pt= 00:04:24.106 19:05:01 -- scripts/common.sh@394 -- # return 1 00:04:24.106 19:05:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:24.106 19:05:01 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:24.106 19:05:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:24.106 19:05:01 -- setup/common.sh@80 -- # echo 4294967296 00:04:24.106 19:05:01 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:24.106 19:05:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.106 19:05:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:24.106 19:05:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.106 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:24.106 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:24.106 19:05:01 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:24.106 19:05:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:24.106 19:05:01 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:24.106 19:05:01 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:24.106 19:05:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:24.106 No valid GPT data, bailing 00:04:24.106 19:05:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:24.106 19:05:01 -- scripts/common.sh@393 -- # pt= 00:04:24.106 19:05:01 -- scripts/common.sh@394 -- # return 1 00:04:24.106 19:05:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:24.106 19:05:01 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:24.106 19:05:01 -- setup/common.sh@78 
-- # [[ -e /sys/block/nvme2n1 ]] 00:04:24.106 19:05:01 -- setup/common.sh@80 -- # echo 6343335936 00:04:24.106 19:05:01 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:24.106 19:05:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.106 19:05:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:24.106 19:05:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.106 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:24.106 19:05:01 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:24.106 19:05:01 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:24.106 19:05:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:24.106 19:05:01 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:24.106 19:05:01 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:24.106 19:05:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:24.106 No valid GPT data, bailing 00:04:24.106 19:05:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:24.106 19:05:01 -- scripts/common.sh@393 -- # pt= 00:04:24.106 19:05:01 -- scripts/common.sh@394 -- # return 1 00:04:24.106 19:05:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:24.106 19:05:01 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:24.106 19:05:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:24.106 19:05:01 -- setup/common.sh@80 -- # echo 5368709120 00:04:24.106 19:05:01 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:24.106 19:05:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.106 19:05:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:24.106 19:05:01 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:24.106 19:05:01 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:24.106 19:05:01 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:24.106 19:05:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:24.106 19:05:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:24.106 19:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:24.106 ************************************ 00:04:24.106 START TEST nvme_mount 00:04:24.106 ************************************ 00:04:24.106 19:05:01 -- common/autotest_common.sh@1102 -- # nvme_mount 00:04:24.106 19:05:01 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:24.106 19:05:01 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:24.106 19:05:01 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.106 19:05:01 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.106 19:05:01 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:24.106 19:05:01 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:24.106 19:05:01 -- setup/common.sh@40 -- # local part_no=1 00:04:24.106 19:05:01 -- setup/common.sh@41 -- # local size=1073741824 00:04:24.106 19:05:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.106 19:05:01 -- setup/common.sh@44 -- # parts=() 00:04:24.106 19:05:01 -- setup/common.sh@44 -- # local parts 00:04:24.106 19:05:01 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.106 19:05:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.106 19:05:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.106 19:05:01 -- setup/common.sh@46 -- # (( 
part++ )) 00:04:24.106 19:05:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.106 19:05:01 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:24.106 19:05:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:24.106 19:05:01 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:25.483 Creating new GPT entries in memory. 00:04:25.483 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.483 other utilities. 00:04:25.483 19:05:02 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.483 19:05:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.483 19:05:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.483 19:05:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.483 19:05:02 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:26.421 Creating new GPT entries in memory. 00:04:26.421 The operation has completed successfully. 00:04:26.421 19:05:03 -- setup/common.sh@57 -- # (( part++ )) 00:04:26.421 19:05:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.421 19:05:03 -- setup/common.sh@62 -- # wait 54224 00:04:26.421 19:05:03 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.421 19:05:03 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:26.421 19:05:03 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.421 19:05:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:26.421 19:05:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:26.421 19:05:03 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.421 19:05:03 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.421 19:05:03 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:26.421 19:05:03 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:26.421 19:05:03 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.421 19:05:03 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.421 19:05:03 -- setup/devices.sh@53 -- # local found=0 00:04:26.421 19:05:03 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.421 19:05:03 -- setup/devices.sh@56 -- # : 00:04:26.421 19:05:03 -- setup/devices.sh@59 -- # local pci status 00:04:26.421 19:05:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.421 19:05:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:26.421 19:05:03 -- setup/devices.sh@47 -- # setup output config 00:04:26.421 19:05:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.421 19:05:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.421 19:05:03 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.421 19:05:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.679 19:05:03 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.679 19:05:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.938 
19:05:04 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.938 19:05:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:26.938 19:05:04 -- setup/devices.sh@63 -- # found=1 00:04:26.938 19:05:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.938 19:05:04 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.938 19:05:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.938 lsblk: /dev/nvme0c0n1: not a block device 00:04:27.197 19:05:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.197 19:05:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.197 19:05:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.197 19:05:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.197 19:05:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.197 19:05:04 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:27.197 19:05:04 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.197 19:05:04 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.197 19:05:04 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.197 19:05:04 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:27.197 19:05:04 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.197 19:05:04 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.197 19:05:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:27.197 19:05:04 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:27.197 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.197 19:05:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:27.197 19:05:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:27.456 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:27.456 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:27.456 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:27.456 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:27.456 19:05:04 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:27.456 19:05:04 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:27.456 19:05:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.456 19:05:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:27.456 19:05:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:27.714 19:05:04 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.714 19:05:04 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.714 19:05:04 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:27.714 19:05:04 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:27.714 19:05:04 -- 
setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.714 19:05:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.714 19:05:04 -- setup/devices.sh@53 -- # local found=0 00:04:27.714 19:05:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.714 19:05:04 -- setup/devices.sh@56 -- # : 00:04:27.714 19:05:04 -- setup/devices.sh@59 -- # local pci status 00:04:27.714 19:05:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.714 19:05:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:27.714 19:05:04 -- setup/devices.sh@47 -- # setup output config 00:04:27.715 19:05:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.715 19:05:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.715 19:05:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.715 19:05:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.973 19:05:05 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.973 19:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.231 19:05:05 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.231 19:05:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:28.231 19:05:05 -- setup/devices.sh@63 -- # found=1 00:04:28.231 19:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.231 19:05:05 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.231 19:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.231 lsblk: /dev/nvme0c0n1: not a block device 00:04:28.231 19:05:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.231 19:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.490 19:05:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.490 19:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.490 19:05:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.490 19:05:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:28.490 19:05:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.490 19:05:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.490 19:05:05 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.490 19:05:05 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.490 19:05:05 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:28.490 19:05:05 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:28.490 19:05:05 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:28.490 19:05:05 -- setup/devices.sh@50 -- # local mount_point= 00:04:28.490 19:05:05 -- setup/devices.sh@51 -- # local test_file= 00:04:28.490 19:05:05 -- setup/devices.sh@53 -- # local found=0 00:04:28.490 19:05:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.490 19:05:05 -- setup/devices.sh@59 -- # local pci status 00:04:28.490 19:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.490 19:05:05 -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:28.490 19:05:05 -- setup/devices.sh@47 -- # setup output config 00:04:28.490 19:05:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.490 19:05:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.748 19:05:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.748 19:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.748 19:05:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.748 19:05:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.314 19:05:06 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.314 19:05:06 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:29.314 19:05:06 -- setup/devices.sh@63 -- # found=1 00:04:29.314 19:05:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.314 19:05:06 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.314 19:05:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.314 lsblk: /dev/nvme0c0n1: not a block device 00:04:29.314 19:05:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.314 19:05:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.572 19:05:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.572 19:05:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.572 19:05:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.572 19:05:06 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:29.572 19:05:06 -- setup/devices.sh@68 -- # return 0 00:04:29.572 19:05:06 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:29.572 19:05:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.572 19:05:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:29.572 19:05:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:29.572 19:05:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:29.572 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.572 00:04:29.572 real 0m5.321s 00:04:29.572 user 0m1.354s 00:04:29.572 sys 0m1.683s 00:04:29.572 19:05:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.572 19:05:06 -- common/autotest_common.sh@10 -- # set +x 00:04:29.572 ************************************ 00:04:29.572 END TEST nvme_mount 00:04:29.572 ************************************ 00:04:29.572 19:05:06 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:29.572 19:05:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:29.572 19:05:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:29.572 19:05:06 -- common/autotest_common.sh@10 -- # set +x 00:04:29.572 ************************************ 00:04:29.572 START TEST dm_mount 00:04:29.572 ************************************ 00:04:29.572 19:05:06 -- common/autotest_common.sh@1102 -- # dm_mount 00:04:29.572 19:05:06 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:29.572 19:05:06 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:29.572 19:05:06 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:29.572 19:05:06 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:04:29.572 19:05:06 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:29.572 19:05:06 -- setup/common.sh@40 -- # local part_no=2 00:04:29.572 
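For reference, a minimal sketch of the device scan traced at the start of this section; it is not the setup/devices.sh and scripts/common.sh helpers themselves. The scan skips the nvme*c* controller nodes, skips zoned namespaces, treats any disk that already carries a partition table as in use, and keeps only disks of at least min_disk_size (3221225472 bytes, i.e. 3 GiB), which is why the 1 GiB nvme0n1 above is probed but never added to the candidate list.

    # Illustrative sketch only; the threshold and sysfs paths are taken from the trace.
    min_disk_size=3221225472
    for dev in /sys/block/nvme*; do
        name=${dev##*/}
        [[ $name == *c* ]] && continue                                     # skip nvme0c0n1-style controller nodes
        [[ $(cat "$dev/queue/zoned" 2>/dev/null) == none ]] || continue    # skip zoned namespaces
        [[ -z $(blkid -s PTTYPE -o value "/dev/$name") ]] || continue      # existing partition table => in use
        size=$(( $(cat "$dev/size") * 512 ))                               # /sys size is in 512-byte sectors
        (( size >= min_disk_size )) && echo "candidate: $name ($size bytes)"
    done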
19:05:06 -- setup/common.sh@41 -- # local size=1073741824 00:04:29.572 19:05:06 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:29.572 19:05:06 -- setup/common.sh@44 -- # parts=() 00:04:29.572 19:05:06 -- setup/common.sh@44 -- # local parts 00:04:29.572 19:05:06 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:29.572 19:05:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.572 19:05:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.572 19:05:06 -- setup/common.sh@46 -- # (( part++ )) 00:04:29.572 19:05:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.572 19:05:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.572 19:05:06 -- setup/common.sh@46 -- # (( part++ )) 00:04:29.572 19:05:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.572 19:05:06 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:29.572 19:05:06 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:29.572 19:05:06 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:30.507 Creating new GPT entries in memory. 00:04:30.507 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.507 other utilities. 00:04:30.507 19:05:07 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.507 19:05:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.507 19:05:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.507 19:05:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.507 19:05:07 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:31.936 Creating new GPT entries in memory. 00:04:31.936 The operation has completed successfully. 00:04:31.936 19:05:08 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.936 19:05:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.936 19:05:08 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:31.936 19:05:08 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.936 19:05:08 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:32.872 The operation has completed successfully. 
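A minimal sketch of the partitioning sequence traced above; the device name and sector ranges are copied from the log, but this is not the SPDK helper itself, which additionally waits for the matching udev events via scripts/sync_dev_uevents.sh before continuing.

    # Illustrative only: wipe the test disk and create the two 128 MiB partitions
    # (262144 sectors each) that the dm_mount test uses.
    disk=/dev/nvme1n1
    sgdisk "$disk" --zap-all                               # destroy any old GPT/MBR metadata
    flock "$disk" sgdisk "$disk" --new=1:2048:264191       # nvme1n1p1, sectors 2048-264191
    flock "$disk" sgdisk "$disk" --new=2:264192:526335     # nvme1n1p2, sectors 264192-526335

Further down the trace, dmsetup create nvme_dm_test stacks a device-mapper node over these two partitions; the verify step then checks that both nvme1n1p1 and nvme1n1p2 show up as holders of dm-0 before the mkfs.ext4 -qF and mount of /dev/mapper/nvme_dm_test.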
00:04:32.872 19:05:09 -- setup/common.sh@57 -- # (( part++ )) 00:04:32.872 19:05:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.872 19:05:09 -- setup/common.sh@62 -- # wait 54942 00:04:32.872 19:05:09 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:32.872 19:05:09 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.872 19:05:09 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.872 19:05:09 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:32.872 19:05:10 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:32.872 19:05:10 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.872 19:05:10 -- setup/devices.sh@161 -- # break 00:04:32.872 19:05:10 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.872 19:05:10 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:32.872 19:05:10 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:32.872 19:05:10 -- setup/devices.sh@166 -- # dm=dm-0 00:04:32.872 19:05:10 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:32.872 19:05:10 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:32.872 19:05:10 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.872 19:05:10 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:32.872 19:05:10 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.872 19:05:10 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.872 19:05:10 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:32.872 19:05:10 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.872 19:05:10 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.872 19:05:10 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:32.872 19:05:10 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:32.872 19:05:10 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.872 19:05:10 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.872 19:05:10 -- setup/devices.sh@53 -- # local found=0 00:04:32.872 19:05:10 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.872 19:05:10 -- setup/devices.sh@56 -- # : 00:04:32.872 19:05:10 -- setup/devices.sh@59 -- # local pci status 00:04:32.872 19:05:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.872 19:05:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:32.872 19:05:10 -- setup/devices.sh@47 -- # setup output config 00:04:32.872 19:05:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.872 19:05:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.872 19:05:10 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.872 19:05:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.130 19:05:10 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.130 19:05:10 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.388 19:05:10 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.388 19:05:10 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:33.388 19:05:10 -- setup/devices.sh@63 -- # found=1 00:04:33.388 19:05:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.388 19:05:10 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.388 19:05:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.388 lsblk: /dev/nvme0c0n1: not a block device 00:04:33.388 19:05:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.388 19:05:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.647 19:05:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.647 19:05:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.647 19:05:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.647 19:05:10 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:33.647 19:05:10 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.647 19:05:10 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:33.647 19:05:10 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:33.647 19:05:10 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.647 19:05:10 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:33.647 19:05:10 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:33.647 19:05:10 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:33.647 19:05:10 -- setup/devices.sh@50 -- # local mount_point= 00:04:33.647 19:05:10 -- setup/devices.sh@51 -- # local test_file= 00:04:33.647 19:05:10 -- setup/devices.sh@53 -- # local found=0 00:04:33.647 19:05:10 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:33.647 19:05:10 -- setup/devices.sh@59 -- # local pci status 00:04:33.647 19:05:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.647 19:05:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:33.647 19:05:10 -- setup/devices.sh@47 -- # setup output config 00:04:33.647 19:05:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.647 19:05:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.905 19:05:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.906 19:05:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.906 19:05:11 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:33.906 19:05:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.164 19:05:11 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:34.164 19:05:11 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:34.164 19:05:11 -- setup/devices.sh@63 -- # found=1 00:04:34.164 19:05:11 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:34.164 19:05:11 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:34.164 19:05:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.423 lsblk: /dev/nvme0c0n1: not a block device 00:04:34.423 19:05:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:34.423 19:05:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.423 19:05:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:34.423 19:05:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.682 19:05:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.682 19:05:11 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:34.682 19:05:11 -- setup/devices.sh@68 -- # return 0 00:04:34.682 19:05:11 -- setup/devices.sh@187 -- # cleanup_dm 00:04:34.682 19:05:11 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.682 19:05:11 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.682 19:05:11 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:34.682 19:05:11 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:34.682 19:05:11 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:34.682 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:34.682 19:05:11 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:34.682 19:05:11 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:34.682 00:04:34.682 real 0m5.081s 00:04:34.682 user 0m0.854s 00:04:34.682 sys 0m1.186s 00:04:34.682 19:05:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.682 19:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:34.682 ************************************ 00:04:34.682 END TEST dm_mount 00:04:34.682 ************************************ 00:04:34.682 19:05:12 -- setup/devices.sh@1 -- # cleanup 00:04:34.682 19:05:12 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:34.682 19:05:12 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.682 19:05:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:34.682 19:05:12 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:34.682 19:05:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:34.682 19:05:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:34.941 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:34.941 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:34.941 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:34.941 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:34.941 19:05:12 -- setup/devices.sh@12 -- # cleanup_dm 00:04:34.941 19:05:12 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.941 19:05:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.941 19:05:12 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:34.941 19:05:12 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:34.941 19:05:12 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:34.941 19:05:12 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:34.941 00:04:34.941 real 0m12.583s 00:04:34.941 user 0m3.171s 00:04:34.941 sys 0m3.778s 00:04:34.941 19:05:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.941 19:05:12 -- common/autotest_common.sh@10 -- # set +x 00:04:34.941 
************************************ 00:04:34.941 END TEST devices 00:04:34.941 ************************************ 00:04:34.941 00:04:34.941 real 0m43.964s 00:04:34.941 user 0m10.202s 00:04:34.941 sys 0m13.629s 00:04:34.941 19:05:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.941 ************************************ 00:04:34.941 19:05:12 -- common/autotest_common.sh@10 -- # set +x 00:04:34.941 END TEST setup.sh 00:04:34.941 ************************************ 00:04:35.200 19:05:12 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:35.200 Hugepages 00:04:35.200 node hugesize free / total 00:04:35.200 node0 1048576kB 0 / 0 00:04:35.200 node0 2048kB 2048 / 2048 00:04:35.200 00:04:35.200 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.459 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:35.459 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:35.459 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:35.718 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:35.718 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0c0n1 00:04:35.718 19:05:13 -- spdk/autotest.sh@141 -- # uname -s 00:04:35.718 19:05:13 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:35.718 19:05:13 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:35.718 19:05:13 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.654 lsblk: /dev/nvme0c0n1: not a block device 00:04:36.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.912 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:36.912 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.170 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.170 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.170 19:05:14 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:38.106 19:05:15 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:38.106 19:05:15 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:38.106 19:05:15 -- common/autotest_common.sh@1517 -- # bdfs=($(get_nvme_bdfs)) 00:04:38.106 19:05:15 -- common/autotest_common.sh@1517 -- # get_nvme_bdfs 00:04:38.106 19:05:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:38.106 19:05:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:38.106 19:05:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.106 19:05:15 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:38.106 19:05:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:38.365 19:05:15 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:38.365 19:05:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:38.365 19:05:15 -- common/autotest_common.sh@1519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.895 Waiting for block devices as requested 00:04:38.895 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:38.895 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:38.895 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.154 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:44.426 * Events for some block/disk devices 
(0000:00:09.0) were not caught, they may be missing 00:04:44.426 19:05:21 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}" 00:04:44.426 19:05:21 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:44.426 19:05:21 -- common/autotest_common.sh@1485 -- # grep 0000:00:06.0/nvme/nvme 00:04:44.426 19:05:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.426 19:05:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:44.426 19:05:21 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:44.426 19:05:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:44.426 19:05:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:44.426 19:05:21 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme2 00:04:44.426 19:05:21 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme2 ]] 00:04:44.426 19:05:21 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme2 00:04:44.426 19:05:21 -- common/autotest_common.sh@1528 -- # grep oacs 00:04:44.426 19:05:21 -- common/autotest_common.sh@1528 -- # cut -d: -f2 00:04:44.426 19:05:21 -- common/autotest_common.sh@1528 -- # oacs=' 0x12a' 00:04:44.426 19:05:21 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8 00:04:44.426 19:05:21 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]] 00:04:44.426 19:05:21 -- common/autotest_common.sh@1537 -- # nvme id-ctrl /dev/nvme2 00:04:44.426 19:05:21 -- common/autotest_common.sh@1537 -- # grep unvmcap 00:04:44.426 19:05:21 -- common/autotest_common.sh@1537 -- # cut -d: -f2 00:04:44.426 19:05:21 -- common/autotest_common.sh@1537 -- # unvmcap=' 0' 00:04:44.426 19:05:21 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]] 00:04:44.426 19:05:21 -- common/autotest_common.sh@1540 -- # continue 00:04:44.426 19:05:21 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}" 00:04:44.426 19:05:21 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:44.426 19:05:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.426 19:05:21 -- common/autotest_common.sh@1485 -- # grep 0000:00:07.0/nvme/nvme 00:04:44.426 19:05:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:44.426 19:05:21 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:44.426 19:05:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:44.426 19:05:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:44.426 19:05:21 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme3 00:04:44.426 19:05:21 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme3 ]] 00:04:44.426 19:05:21 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme3 00:04:44.426 19:05:21 -- common/autotest_common.sh@1528 -- # grep oacs 00:04:44.426 19:05:21 -- common/autotest_common.sh@1528 -- # cut -d: -f2 00:04:44.426 19:05:21 -- common/autotest_common.sh@1528 -- # oacs=' 0x12a' 00:04:44.426 19:05:21 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8 00:04:44.426 19:05:21 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]] 00:04:44.426 19:05:21 -- common/autotest_common.sh@1537 -- # 
nvme id-ctrl /dev/nvme3 00:04:44.426 19:05:21 -- common/autotest_common.sh@1537 -- # grep unvmcap 00:04:44.426 19:05:21 -- common/autotest_common.sh@1537 -- # cut -d: -f2 00:04:44.426 19:05:21 -- common/autotest_common.sh@1537 -- # unvmcap=' 0' 00:04:44.426 19:05:21 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]] 00:04:44.426 19:05:21 -- common/autotest_common.sh@1540 -- # continue 00:04:44.426 19:05:21 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}" 00:04:44.426 19:05:21 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:44.426 19:05:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.426 19:05:21 -- common/autotest_common.sh@1485 -- # grep 0000:00:08.0/nvme/nvme 00:04:44.426 19:05:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:44.427 19:05:21 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:44.427 19:05:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:44.427 19:05:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:44.427 19:05:21 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme1 00:04:44.427 19:05:21 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme1 ]] 00:04:44.427 19:05:21 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme1 00:04:44.427 19:05:21 -- common/autotest_common.sh@1528 -- # grep oacs 00:04:44.427 19:05:21 -- common/autotest_common.sh@1528 -- # cut -d: -f2 00:04:44.427 19:05:21 -- common/autotest_common.sh@1528 -- # oacs=' 0x12a' 00:04:44.427 19:05:21 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8 00:04:44.427 19:05:21 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]] 00:04:44.427 19:05:21 -- common/autotest_common.sh@1537 -- # grep unvmcap 00:04:44.427 19:05:21 -- common/autotest_common.sh@1537 -- # nvme id-ctrl /dev/nvme1 00:04:44.427 19:05:21 -- common/autotest_common.sh@1537 -- # cut -d: -f2 00:04:44.427 19:05:21 -- common/autotest_common.sh@1537 -- # unvmcap=' 0' 00:04:44.427 19:05:21 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]] 00:04:44.427 19:05:21 -- common/autotest_common.sh@1540 -- # continue 00:04:44.427 19:05:21 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}" 00:04:44.427 19:05:21 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:44.427 19:05:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.427 19:05:21 -- common/autotest_common.sh@1485 -- # grep 0000:00:09.0/nvme/nvme 00:04:44.427 19:05:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:44.427 19:05:21 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:44.427 19:05:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:44.427 19:05:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:44.427 19:05:21 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme0 00:04:44.427 19:05:21 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme0 ]] 00:04:44.427 19:05:21 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme0 00:04:44.427 19:05:21 -- common/autotest_common.sh@1528 -- # grep oacs 00:04:44.427 19:05:21 -- 
common/autotest_common.sh@1528 -- # cut -d: -f2 00:04:44.427 19:05:21 -- common/autotest_common.sh@1528 -- # oacs=' 0x12a' 00:04:44.427 19:05:21 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8 00:04:44.427 19:05:21 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]] 00:04:44.427 19:05:21 -- common/autotest_common.sh@1537 -- # nvme id-ctrl /dev/nvme0 00:04:44.427 19:05:21 -- common/autotest_common.sh@1537 -- # grep unvmcap 00:04:44.427 19:05:21 -- common/autotest_common.sh@1537 -- # cut -d: -f2 00:04:44.427 19:05:21 -- common/autotest_common.sh@1537 -- # unvmcap=' 0' 00:04:44.427 19:05:21 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]] 00:04:44.427 19:05:21 -- common/autotest_common.sh@1540 -- # continue 00:04:44.427 19:05:21 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:44.427 19:05:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:44.427 19:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:44.427 19:05:21 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:44.427 19:05:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:44.427 19:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:44.427 19:05:21 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.363 lsblk: /dev/nvme0c0n1: not a block device 00:04:45.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.621 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.622 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.622 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.622 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.622 19:05:22 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:45.622 19:05:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:45.622 19:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.622 19:05:23 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:45.622 19:05:23 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:04:45.622 19:05:23 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:04:45.622 19:05:23 -- common/autotest_common.sh@1560 -- # bdfs=() 00:04:45.622 19:05:23 -- common/autotest_common.sh@1560 -- # local bdfs 00:04:45.622 19:05:23 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:45.622 19:05:23 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:45.622 19:05:23 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:45.622 19:05:23 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.622 19:05:23 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:45.622 19:05:23 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:45.881 19:05:23 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:45.881 19:05:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:45.881 19:05:23 -- common/autotest_common.sh@1562 -- # for bdf in $(get_nvme_bdfs) 00:04:45.881 19:05:23 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:45.881 19:05:23 -- common/autotest_common.sh@1563 -- # device=0x0010 00:04:45.881 19:05:23 -- common/autotest_common.sh@1564 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.881 19:05:23 -- common/autotest_common.sh@1562 -- # for bdf in $(get_nvme_bdfs) 00:04:45.881 19:05:23 -- common/autotest_common.sh@1563 -- # cat 
/sys/bus/pci/devices/0000:00:07.0/device 00:04:45.881 19:05:23 -- common/autotest_common.sh@1563 -- # device=0x0010 00:04:45.881 19:05:23 -- common/autotest_common.sh@1564 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.881 19:05:23 -- common/autotest_common.sh@1562 -- # for bdf in $(get_nvme_bdfs) 00:04:45.881 19:05:23 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:45.881 19:05:23 -- common/autotest_common.sh@1563 -- # device=0x0010 00:04:45.881 19:05:23 -- common/autotest_common.sh@1564 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.881 19:05:23 -- common/autotest_common.sh@1562 -- # for bdf in $(get_nvme_bdfs) 00:04:45.881 19:05:23 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:45.881 19:05:23 -- common/autotest_common.sh@1563 -- # device=0x0010 00:04:45.881 19:05:23 -- common/autotest_common.sh@1564 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.881 19:05:23 -- common/autotest_common.sh@1569 -- # printf '%s\n' 00:04:45.881 19:05:23 -- common/autotest_common.sh@1575 -- # [[ -z '' ]] 00:04:45.881 19:05:23 -- common/autotest_common.sh@1576 -- # return 0 00:04:45.881 19:05:23 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:45.881 19:05:23 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:45.881 19:05:23 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:45.881 19:05:23 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:45.881 19:05:23 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:45.881 19:05:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:45.881 19:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:45.881 19:05:23 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:45.881 19:05:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:45.881 19:05:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:45.881 19:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:45.881 ************************************ 00:04:45.881 START TEST env 00:04:45.881 ************************************ 00:04:45.881 19:05:23 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:45.881 * Looking for test storage... 
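The controller lookups traced above map each PCI BDF to its /dev/nvmeX character device through sysfs, then parse two fields out of nvme id-ctrl: OACS (whether namespace management is supported) and UNVMCAP (unallocated capacity). A minimal sketch, with the BDF value copied from the log and everything else illustrative:

    # Illustrative sketch of get_nvme_ctrlr_from_bdf plus the OACS/UNVMCAP checks above.
    bdf=0000:00:06.0
    ctrl=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")   # -> nvme2 in this run
    oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)                       # 0x12a above
    unvmcap=$(nvme id-ctrl "/dev/$ctrl" | grep unvmcap | cut -d: -f2)                 # 0 above
    (( oacs & 0x8 )) && echo "$ctrl supports namespace management"   # bit 3 of OACS, hence oacs_ns_manage=8

The separate cat /sys/bus/pci/devices/<bdf>/device checks in opal_revert_cleanup only look for the 0x0a54 device ID (an Intel datacenter NVMe part); the emulated 0x0010 controllers here never match, so the revert step is a no-op in this run.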
00:04:45.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:45.881 19:05:23 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:45.881 19:05:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:45.881 19:05:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:45.881 19:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:45.881 ************************************ 00:04:45.881 START TEST env_memory 00:04:45.881 ************************************ 00:04:45.881 19:05:23 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:45.881 00:04:45.881 00:04:45.881 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.881 http://cunit.sourceforge.net/ 00:04:45.881 00:04:45.881 00:04:45.881 Suite: memory 00:04:45.881 Test: alloc and free memory map ...[2024-02-14 19:05:23.298440] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:46.139 passed 00:04:46.139 Test: mem map translation ...[2024-02-14 19:05:23.360384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:46.139 [2024-02-14 19:05:23.360470] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:46.139 [2024-02-14 19:05:23.360603] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:46.139 [2024-02-14 19:05:23.360635] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:46.139 passed 00:04:46.139 Test: mem map registration ...[2024-02-14 19:05:23.459690] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:46.139 [2024-02-14 19:05:23.459838] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:46.139 passed 00:04:46.398 Test: mem map adjacent registrations ...passed 00:04:46.398 00:04:46.398 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.398 suites 1 1 n/a 0 0 00:04:46.398 tests 4 4 4 0 0 00:04:46.398 asserts 152 152 152 0 n/a 00:04:46.398 00:04:46.398 Elapsed time = 0.347 seconds 00:04:46.398 00:04:46.398 real 0m0.385s 00:04:46.398 user 0m0.352s 00:04:46.398 sys 0m0.030s 00:04:46.398 ************************************ 00:04:46.398 END TEST env_memory 00:04:46.398 ************************************ 00:04:46.398 19:05:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.398 19:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:46.398 19:05:23 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.398 19:05:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:46.398 19:05:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:46.398 19:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:46.398 ************************************ 00:04:46.398 START TEST env_vtophys 00:04:46.398 ************************************ 00:04:46.398 19:05:23 -- common/autotest_common.sh@1102 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.398 EAL: lib.eal log level changed from notice to debug 00:04:46.398 EAL: Detected lcore 0 as core 0 on socket 0 00:04:46.398 EAL: Detected lcore 1 as core 0 on socket 0 00:04:46.398 EAL: Detected lcore 2 as core 0 on socket 0 00:04:46.398 EAL: Detected lcore 3 as core 0 on socket 0 00:04:46.398 EAL: Detected lcore 4 as core 0 on socket 0 00:04:46.398 EAL: Detected lcore 5 as core 0 on socket 0 00:04:46.398 EAL: Detected lcore 6 as core 0 on socket 0 00:04:46.398 EAL: Detected lcore 7 as core 0 on socket 0 00:04:46.398 EAL: Detected lcore 8 as core 0 on socket 0 00:04:46.398 EAL: Detected lcore 9 as core 0 on socket 0 00:04:46.398 EAL: Maximum logical cores by configuration: 128 00:04:46.398 EAL: Detected CPU lcores: 10 00:04:46.398 EAL: Detected NUMA nodes: 1 00:04:46.398 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:46.398 EAL: Detected shared linkage of DPDK 00:04:46.398 EAL: No shared files mode enabled, IPC will be disabled 00:04:46.398 EAL: Selected IOVA mode 'PA' 00:04:46.398 EAL: Probing VFIO support... 00:04:46.398 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:46.398 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:46.398 EAL: Ask a virtual area of 0x2e000 bytes 00:04:46.398 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:46.398 EAL: Setting up physically contiguous memory... 00:04:46.398 EAL: Setting maximum number of open files to 524288 00:04:46.398 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:46.398 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:46.398 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.398 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:46.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.398 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.398 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:46.398 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:46.398 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.398 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:46.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.398 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.398 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:46.398 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:46.398 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.398 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:46.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.398 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.398 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:46.398 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:46.398 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.398 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:46.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.398 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.398 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:46.398 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:46.398 EAL: Hugepages will be freed exactly as allocated. 
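A quick consistency check on the EAL reservations above: each memseg list is created with n_segs:8192 and hugepage_sz:2097152 (2 MiB), so a single list needs 8192 x 2 MiB = 16 GiB of virtual address space, which is exactly the 0x400000000-byte virtual area reserved for each of the four lists.

    # 8192 segments x 2 MiB per segment = 16 GiB per memseg list
    printf '0x%x\n' $(( 8192 * 2097152 ))   # prints 0x400000000, matching the reservations above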
00:04:46.398 EAL: No shared files mode enabled, IPC is disabled 00:04:46.398 EAL: No shared files mode enabled, IPC is disabled 00:04:46.658 EAL: TSC frequency is ~2200000 KHz 00:04:46.658 EAL: Main lcore 0 is ready (tid=7f22072d8a40;cpuset=[0]) 00:04:46.658 EAL: Trying to obtain current memory policy. 00:04:46.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.658 EAL: Restoring previous memory policy: 0 00:04:46.658 EAL: request: mp_malloc_sync 00:04:46.658 EAL: No shared files mode enabled, IPC is disabled 00:04:46.658 EAL: Heap on socket 0 was expanded by 2MB 00:04:46.658 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:46.658 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:46.658 EAL: Mem event callback 'spdk:(nil)' registered 00:04:46.658 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:46.658 00:04:46.658 00:04:46.658 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.658 http://cunit.sourceforge.net/ 00:04:46.658 00:04:46.658 00:04:46.658 Suite: components_suite 00:04:46.917 Test: vtophys_malloc_test ...passed 00:04:46.917 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:46.917 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.917 EAL: Restoring previous memory policy: 4 00:04:46.917 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.917 EAL: request: mp_malloc_sync 00:04:46.917 EAL: No shared files mode enabled, IPC is disabled 00:04:46.917 EAL: Heap on socket 0 was expanded by 4MB 00:04:46.917 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.917 EAL: request: mp_malloc_sync 00:04:46.917 EAL: No shared files mode enabled, IPC is disabled 00:04:46.917 EAL: Heap on socket 0 was shrunk by 4MB 00:04:46.917 EAL: Trying to obtain current memory policy. 00:04:46.917 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.917 EAL: Restoring previous memory policy: 4 00:04:46.917 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.917 EAL: request: mp_malloc_sync 00:04:46.917 EAL: No shared files mode enabled, IPC is disabled 00:04:46.917 EAL: Heap on socket 0 was expanded by 6MB 00:04:46.917 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.917 EAL: request: mp_malloc_sync 00:04:46.917 EAL: No shared files mode enabled, IPC is disabled 00:04:46.917 EAL: Heap on socket 0 was shrunk by 6MB 00:04:46.917 EAL: Trying to obtain current memory policy. 00:04:46.917 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.917 EAL: Restoring previous memory policy: 4 00:04:46.917 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.917 EAL: request: mp_malloc_sync 00:04:46.917 EAL: No shared files mode enabled, IPC is disabled 00:04:46.917 EAL: Heap on socket 0 was expanded by 10MB 00:04:47.176 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.176 EAL: request: mp_malloc_sync 00:04:47.176 EAL: No shared files mode enabled, IPC is disabled 00:04:47.176 EAL: Heap on socket 0 was shrunk by 10MB 00:04:47.176 EAL: Trying to obtain current memory policy. 
00:04:47.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.176 EAL: Restoring previous memory policy: 4 00:04:47.176 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.176 EAL: request: mp_malloc_sync 00:04:47.176 EAL: No shared files mode enabled, IPC is disabled 00:04:47.176 EAL: Heap on socket 0 was expanded by 18MB 00:04:47.176 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.176 EAL: request: mp_malloc_sync 00:04:47.176 EAL: No shared files mode enabled, IPC is disabled 00:04:47.176 EAL: Heap on socket 0 was shrunk by 18MB 00:04:47.176 EAL: Trying to obtain current memory policy. 00:04:47.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.176 EAL: Restoring previous memory policy: 4 00:04:47.176 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.176 EAL: request: mp_malloc_sync 00:04:47.176 EAL: No shared files mode enabled, IPC is disabled 00:04:47.176 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.176 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.176 EAL: request: mp_malloc_sync 00:04:47.176 EAL: No shared files mode enabled, IPC is disabled 00:04:47.176 EAL: Heap on socket 0 was shrunk by 34MB 00:04:47.176 EAL: Trying to obtain current memory policy. 00:04:47.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.176 EAL: Restoring previous memory policy: 4 00:04:47.176 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.176 EAL: request: mp_malloc_sync 00:04:47.176 EAL: No shared files mode enabled, IPC is disabled 00:04:47.176 EAL: Heap on socket 0 was expanded by 66MB 00:04:47.435 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.435 EAL: request: mp_malloc_sync 00:04:47.435 EAL: No shared files mode enabled, IPC is disabled 00:04:47.435 EAL: Heap on socket 0 was shrunk by 66MB 00:04:47.435 EAL: Trying to obtain current memory policy. 00:04:47.435 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.435 EAL: Restoring previous memory policy: 4 00:04:47.435 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.435 EAL: request: mp_malloc_sync 00:04:47.435 EAL: No shared files mode enabled, IPC is disabled 00:04:47.435 EAL: Heap on socket 0 was expanded by 130MB 00:04:47.693 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.693 EAL: request: mp_malloc_sync 00:04:47.693 EAL: No shared files mode enabled, IPC is disabled 00:04:47.693 EAL: Heap on socket 0 was shrunk by 130MB 00:04:47.693 EAL: Trying to obtain current memory policy. 00:04:47.693 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.952 EAL: Restoring previous memory policy: 4 00:04:47.952 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.952 EAL: request: mp_malloc_sync 00:04:47.952 EAL: No shared files mode enabled, IPC is disabled 00:04:47.952 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.209 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.209 EAL: request: mp_malloc_sync 00:04:48.209 EAL: No shared files mode enabled, IPC is disabled 00:04:48.209 EAL: Heap on socket 0 was shrunk by 258MB 00:04:48.467 EAL: Trying to obtain current memory policy. 
00:04:48.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.725 EAL: Restoring previous memory policy: 4 00:04:48.725 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.725 EAL: request: mp_malloc_sync 00:04:48.725 EAL: No shared files mode enabled, IPC is disabled 00:04:48.725 EAL: Heap on socket 0 was expanded by 514MB 00:04:49.292 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.292 EAL: request: mp_malloc_sync 00:04:49.292 EAL: No shared files mode enabled, IPC is disabled 00:04:49.292 EAL: Heap on socket 0 was shrunk by 514MB 00:04:50.226 EAL: Trying to obtain current memory policy. 00:04:50.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.226 EAL: Restoring previous memory policy: 4 00:04:50.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.226 EAL: request: mp_malloc_sync 00:04:50.226 EAL: No shared files mode enabled, IPC is disabled 00:04:50.226 EAL: Heap on socket 0 was expanded by 1026MB 00:04:51.600 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.600 EAL: request: mp_malloc_sync 00:04:51.600 EAL: No shared files mode enabled, IPC is disabled 00:04:51.600 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:52.975 passed 00:04:52.975 00:04:52.975 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.975 suites 1 1 n/a 0 0 00:04:52.975 tests 2 2 2 0 0 00:04:52.975 asserts 5474 5474 5474 0 n/a 00:04:52.975 00:04:52.975 Elapsed time = 6.272 seconds 00:04:52.975 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.975 EAL: request: mp_malloc_sync 00:04:52.975 EAL: No shared files mode enabled, IPC is disabled 00:04:52.975 EAL: Heap on socket 0 was shrunk by 2MB 00:04:52.975 EAL: No shared files mode enabled, IPC is disabled 00:04:52.975 EAL: No shared files mode enabled, IPC is disabled 00:04:52.975 EAL: No shared files mode enabled, IPC is disabled 00:04:52.975 00:04:52.975 real 0m6.587s 00:04:52.975 user 0m5.720s 00:04:52.975 sys 0m0.711s 00:04:52.975 19:05:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.975 ************************************ 00:04:52.976 END TEST env_vtophys 00:04:52.976 ************************************ 00:04:52.976 19:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:52.976 19:05:30 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:52.976 19:05:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:52.976 19:05:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:52.976 19:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:52.976 ************************************ 00:04:52.976 START TEST env_pci 00:04:52.976 ************************************ 00:04:52.976 19:05:30 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:52.976 00:04:52.976 00:04:52.976 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.976 http://cunit.sourceforge.net/ 00:04:52.976 00:04:52.976 00:04:52.976 Suite: pci 00:04:52.976 Test: pci_hook ...[2024-02-14 19:05:30.340096] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56818 has claimed it 00:04:52.976 passed 00:04:52.976 00:04:52.976 EAL: Cannot find device (10000:00:01.0) 00:04:52.976 EAL: Failed to attach device on primary process 00:04:52.976 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.976 suites 1 1 n/a 0 0 00:04:52.976 tests 1 1 1 0 0 00:04:52.976 asserts 25 25 25 0 n/a 00:04:52.976 00:04:52.976 Elapsed 
time = 0.008 seconds 00:04:52.976 00:04:52.976 real 0m0.072s 00:04:52.976 user 0m0.036s 00:04:52.976 sys 0m0.035s 00:04:52.976 19:05:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.976 ************************************ 00:04:52.976 END TEST env_pci 00:04:52.976 ************************************ 00:04:52.976 19:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.234 19:05:30 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:53.234 19:05:30 -- env/env.sh@15 -- # uname 00:04:53.234 19:05:30 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:53.234 19:05:30 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:53.234 19:05:30 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:53.234 19:05:30 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:04:53.234 19:05:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:53.234 19:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.234 ************************************ 00:04:53.234 START TEST env_dpdk_post_init 00:04:53.234 ************************************ 00:04:53.234 19:05:30 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:53.234 EAL: Detected CPU lcores: 10 00:04:53.234 EAL: Detected NUMA nodes: 1 00:04:53.234 EAL: Detected shared linkage of DPDK 00:04:53.234 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:53.234 EAL: Selected IOVA mode 'PA' 00:04:53.234 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:53.234 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:53.234 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:53.493 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:53.493 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:53.493 Starting DPDK initialization... 00:04:53.493 Starting SPDK post initialization... 00:04:53.493 SPDK NVMe probe 00:04:53.493 Attaching to 0000:00:06.0 00:04:53.493 Attaching to 0000:00:07.0 00:04:53.493 Attaching to 0000:00:08.0 00:04:53.493 Attaching to 0000:00:09.0 00:04:53.493 Attached to 0000:00:06.0 00:04:53.493 Attached to 0000:00:07.0 00:04:53.493 Attached to 0000:00:09.0 00:04:53.493 Attached to 0000:00:08.0 00:04:53.493 Cleaning up... 
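The env_dpdk_post_init run above brings up DPDK through SPDK's env layer with core mask 0x1 and a fixed base virtual address, then attaches the emulated NVMe controllers. A minimal sketch of the corresponding environment setup, assuming the spdk_env_opts fields used here (name, core_mask, base_virtaddr) are present in this SPDK revision; the function and application names are illustrative:

    #include "spdk/env.h"

    /* Mirrors the "-c 0x1 --base-virtaddr=0x200000000000" arguments used by
     * the test; returns 0 on success. */
    static int init_env_for_post_init_test(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_dpdk_post_init_example";   /* illustrative name */
        opts.core_mask = "0x1";
        opts.base_virtaddr = 0x200000000000ULL;

        if (spdk_env_init(&opts) < 0) {
            return -1;
        }
        /* NVMe controllers are then attached via the spdk_nvme probe APIs,
         * as reflected by the "Attaching to 0000:00:0x.0" lines above. */
        return 0;
    }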
00:04:53.493 00:04:53.493 real 0m0.275s 00:04:53.493 user 0m0.097s 00:04:53.493 sys 0m0.081s 00:04:53.493 19:05:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.493 19:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.493 ************************************ 00:04:53.493 END TEST env_dpdk_post_init 00:04:53.493 ************************************ 00:04:53.493 19:05:30 -- env/env.sh@26 -- # uname 00:04:53.493 19:05:30 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:53.493 19:05:30 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:53.493 19:05:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:53.493 19:05:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:53.493 19:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.493 ************************************ 00:04:53.493 START TEST env_mem_callbacks 00:04:53.493 ************************************ 00:04:53.493 19:05:30 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:53.493 EAL: Detected CPU lcores: 10 00:04:53.493 EAL: Detected NUMA nodes: 1 00:04:53.493 EAL: Detected shared linkage of DPDK 00:04:53.493 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:53.493 EAL: Selected IOVA mode 'PA' 00:04:53.752 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:53.752 00:04:53.752 00:04:53.752 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.752 http://cunit.sourceforge.net/ 00:04:53.752 00:04:53.752 00:04:53.752 Suite: memory 00:04:53.752 Test: test ... 00:04:53.752 register 0x200000200000 2097152 00:04:53.752 malloc 3145728 00:04:53.752 register 0x200000400000 4194304 00:04:53.752 buf 0x2000004fffc0 len 3145728 PASSED 00:04:53.752 malloc 64 00:04:53.752 buf 0x2000004ffec0 len 64 PASSED 00:04:53.752 malloc 4194304 00:04:53.752 register 0x200000800000 6291456 00:04:53.752 buf 0x2000009fffc0 len 4194304 PASSED 00:04:53.752 free 0x2000004fffc0 3145728 00:04:53.752 free 0x2000004ffec0 64 00:04:53.752 unregister 0x200000400000 4194304 PASSED 00:04:53.752 free 0x2000009fffc0 4194304 00:04:53.752 unregister 0x200000800000 6291456 PASSED 00:04:53.752 malloc 8388608 00:04:53.752 register 0x200000400000 10485760 00:04:53.752 buf 0x2000005fffc0 len 8388608 PASSED 00:04:53.752 free 0x2000005fffc0 8388608 00:04:53.752 unregister 0x200000400000 10485760 PASSED 00:04:53.752 passed 00:04:53.752 00:04:53.752 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.752 suites 1 1 n/a 0 0 00:04:53.752 tests 1 1 1 0 0 00:04:53.752 asserts 15 15 15 0 n/a 00:04:53.752 00:04:53.752 Elapsed time = 0.056 seconds 00:04:53.752 00:04:53.752 real 0m0.260s 00:04:53.752 user 0m0.089s 00:04:53.752 sys 0m0.068s 00:04:53.752 19:05:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.752 19:05:31 -- common/autotest_common.sh@10 -- # set +x 00:04:53.752 ************************************ 00:04:53.752 END TEST env_mem_callbacks 00:04:53.752 ************************************ 00:04:53.752 00:04:53.752 real 0m7.924s 00:04:53.752 user 0m6.412s 00:04:53.752 sys 0m1.130s 00:04:53.752 19:05:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.752 19:05:31 -- common/autotest_common.sh@10 -- # set +x 00:04:53.752 ************************************ 00:04:53.752 END TEST env 00:04:53.752 ************************************ 00:04:53.752 19:05:31 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
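The env_mem_callbacks trace above pairs register/unregister lines with malloc/free of anonymous regions, exercising spdk_mem_register() and spdk_mem_unregister() and the notification callbacks attached to SPDK's memory maps. A minimal sketch of registering an externally allocated, 2 MB-aligned region, assuming the caller supplies suitably aligned memory; the helper name and region size are illustrative:

    #include "spdk/env.h"

    #define REGION_SIZE (2 * 1024 * 1024)   /* matches the 2 MB granularity in the trace */

    /* vaddr must point at a 2 MB-aligned region of at least REGION_SIZE bytes
     * that SPDK did not allocate itself (e.g. an anonymous mmap). */
    static int register_external_buffer(void *vaddr)
    {
        /* Registered mem maps receive a REGISTER notification for this range,
         * which is what the "register ..." lines in the trace correspond to. */
        if (spdk_mem_register(vaddr, REGION_SIZE) != 0) {
            return -1;
        }

        /* ... the buffer can now be used for I/O ... */

        /* Maps receive the matching UNREGISTER notification. */
        return spdk_mem_unregister(vaddr, REGION_SIZE);
    }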
00:04:53.752 19:05:31 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:53.752 19:05:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:53.752 19:05:31 -- common/autotest_common.sh@10 -- # set +x 00:04:53.752 ************************************ 00:04:53.752 START TEST rpc 00:04:53.752 ************************************ 00:04:53.752 19:05:31 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:54.011 * Looking for test storage... 00:04:54.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:54.011 19:05:31 -- rpc/rpc.sh@65 -- # spdk_pid=56936 00:04:54.011 19:05:31 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.011 19:05:31 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:54.011 19:05:31 -- rpc/rpc.sh@67 -- # waitforlisten 56936 00:04:54.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.011 19:05:31 -- common/autotest_common.sh@817 -- # '[' -z 56936 ']' 00:04:54.011 19:05:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.011 19:05:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.011 19:05:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.011 19:05:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.011 19:05:31 -- common/autotest_common.sh@10 -- # set +x 00:04:54.011 [2024-02-14 19:05:31.304855] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:54.011 [2024-02-14 19:05:31.305303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56936 ] 00:04:54.270 [2024-02-14 19:05:31.472420] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.270 [2024-02-14 19:05:31.629152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.270 [2024-02-14 19:05:31.629387] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:54.270 [2024-02-14 19:05:31.629412] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56936' to capture a snapshot of events at runtime. 00:04:54.270 [2024-02-14 19:05:31.629429] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56936 for offline analysis/debug. 
00:04:54.270 [2024-02-14 19:05:31.629459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.698 19:05:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:55.698 19:05:33 -- common/autotest_common.sh@850 -- # return 0 00:04:55.698 19:05:33 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:55.698 19:05:33 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:55.698 19:05:33 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:55.698 19:05:33 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:55.698 19:05:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:55.698 19:05:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:55.698 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:55.698 ************************************ 00:04:55.698 START TEST rpc_integrity 00:04:55.698 ************************************ 00:04:55.698 19:05:33 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:04:55.698 19:05:33 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:55.698 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.698 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:55.698 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.698 19:05:33 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:55.698 19:05:33 -- rpc/rpc.sh@13 -- # jq length 00:04:55.957 19:05:33 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:55.957 19:05:33 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:55.957 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.958 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:55.958 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.958 19:05:33 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:55.958 19:05:33 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:55.958 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.958 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:55.958 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.958 19:05:33 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:55.958 { 00:04:55.958 "name": "Malloc0", 00:04:55.958 "aliases": [ 00:04:55.958 "29b02312-3ff7-4fda-abf0-bdd662f2dd10" 00:04:55.958 ], 00:04:55.958 "product_name": "Malloc disk", 00:04:55.958 "block_size": 512, 00:04:55.958 "num_blocks": 16384, 00:04:55.958 "uuid": "29b02312-3ff7-4fda-abf0-bdd662f2dd10", 00:04:55.958 "assigned_rate_limits": { 00:04:55.958 "rw_ios_per_sec": 0, 00:04:55.958 "rw_mbytes_per_sec": 0, 00:04:55.958 "r_mbytes_per_sec": 0, 00:04:55.958 "w_mbytes_per_sec": 0 00:04:55.958 }, 00:04:55.958 "claimed": false, 00:04:55.958 "zoned": false, 00:04:55.958 "supported_io_types": { 00:04:55.958 "read": true, 00:04:55.958 "write": true, 00:04:55.958 "unmap": true, 00:04:55.958 "write_zeroes": true, 00:04:55.958 "flush": true, 00:04:55.958 "reset": true, 00:04:55.958 "compare": false, 00:04:55.958 "compare_and_write": false, 00:04:55.958 "abort": true, 00:04:55.958 "nvme_admin": false, 00:04:55.958 "nvme_io": false 00:04:55.958 }, 00:04:55.958 "memory_domains": [ 00:04:55.958 { 00:04:55.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.958 
"dma_device_type": 2 00:04:55.958 } 00:04:55.958 ], 00:04:55.958 "driver_specific": {} 00:04:55.958 } 00:04:55.958 ]' 00:04:55.958 19:05:33 -- rpc/rpc.sh@17 -- # jq length 00:04:55.958 19:05:33 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:55.958 19:05:33 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:55.958 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.958 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:55.958 [2024-02-14 19:05:33.212425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:55.958 [2024-02-14 19:05:33.212546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:55.958 [2024-02-14 19:05:33.212585] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:55.958 [2024-02-14 19:05:33.212603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:55.958 [2024-02-14 19:05:33.215421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:55.958 [2024-02-14 19:05:33.215482] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:55.958 Passthru0 00:04:55.958 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.958 19:05:33 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:55.958 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.958 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:55.958 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.958 19:05:33 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:55.958 { 00:04:55.958 "name": "Malloc0", 00:04:55.958 "aliases": [ 00:04:55.958 "29b02312-3ff7-4fda-abf0-bdd662f2dd10" 00:04:55.958 ], 00:04:55.958 "product_name": "Malloc disk", 00:04:55.958 "block_size": 512, 00:04:55.958 "num_blocks": 16384, 00:04:55.958 "uuid": "29b02312-3ff7-4fda-abf0-bdd662f2dd10", 00:04:55.958 "assigned_rate_limits": { 00:04:55.958 "rw_ios_per_sec": 0, 00:04:55.958 "rw_mbytes_per_sec": 0, 00:04:55.958 "r_mbytes_per_sec": 0, 00:04:55.958 "w_mbytes_per_sec": 0 00:04:55.958 }, 00:04:55.958 "claimed": true, 00:04:55.958 "claim_type": "exclusive_write", 00:04:55.958 "zoned": false, 00:04:55.958 "supported_io_types": { 00:04:55.958 "read": true, 00:04:55.958 "write": true, 00:04:55.958 "unmap": true, 00:04:55.958 "write_zeroes": true, 00:04:55.958 "flush": true, 00:04:55.958 "reset": true, 00:04:55.958 "compare": false, 00:04:55.958 "compare_and_write": false, 00:04:55.958 "abort": true, 00:04:55.958 "nvme_admin": false, 00:04:55.958 "nvme_io": false 00:04:55.958 }, 00:04:55.958 "memory_domains": [ 00:04:55.958 { 00:04:55.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.958 "dma_device_type": 2 00:04:55.958 } 00:04:55.958 ], 00:04:55.958 "driver_specific": {} 00:04:55.958 }, 00:04:55.958 { 00:04:55.958 "name": "Passthru0", 00:04:55.958 "aliases": [ 00:04:55.958 "7538bdd6-2ecc-576c-a1f0-6b1a79a4dbcc" 00:04:55.958 ], 00:04:55.958 "product_name": "passthru", 00:04:55.958 "block_size": 512, 00:04:55.958 "num_blocks": 16384, 00:04:55.958 "uuid": "7538bdd6-2ecc-576c-a1f0-6b1a79a4dbcc", 00:04:55.958 "assigned_rate_limits": { 00:04:55.958 "rw_ios_per_sec": 0, 00:04:55.958 "rw_mbytes_per_sec": 0, 00:04:55.958 "r_mbytes_per_sec": 0, 00:04:55.958 "w_mbytes_per_sec": 0 00:04:55.958 }, 00:04:55.958 "claimed": false, 00:04:55.958 "zoned": false, 00:04:55.958 "supported_io_types": { 00:04:55.958 "read": true, 00:04:55.958 "write": true, 00:04:55.958 "unmap": true, 00:04:55.958 
"write_zeroes": true, 00:04:55.958 "flush": true, 00:04:55.958 "reset": true, 00:04:55.958 "compare": false, 00:04:55.958 "compare_and_write": false, 00:04:55.958 "abort": true, 00:04:55.958 "nvme_admin": false, 00:04:55.958 "nvme_io": false 00:04:55.958 }, 00:04:55.958 "memory_domains": [ 00:04:55.958 { 00:04:55.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.958 "dma_device_type": 2 00:04:55.958 } 00:04:55.958 ], 00:04:55.958 "driver_specific": { 00:04:55.958 "passthru": { 00:04:55.958 "name": "Passthru0", 00:04:55.958 "base_bdev_name": "Malloc0" 00:04:55.958 } 00:04:55.958 } 00:04:55.958 } 00:04:55.958 ]' 00:04:55.958 19:05:33 -- rpc/rpc.sh@21 -- # jq length 00:04:55.958 19:05:33 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:55.958 19:05:33 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:55.958 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.958 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:55.958 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.958 19:05:33 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:55.958 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.958 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:55.958 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.958 19:05:33 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:55.958 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.958 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:55.958 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.958 19:05:33 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:55.958 19:05:33 -- rpc/rpc.sh@26 -- # jq length 00:04:56.218 ************************************ 00:04:56.218 END TEST rpc_integrity 00:04:56.218 ************************************ 00:04:56.218 19:05:33 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.218 00:04:56.218 real 0m0.350s 00:04:56.218 user 0m0.220s 00:04:56.218 sys 0m0.038s 00:04:56.218 19:05:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.218 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.218 19:05:33 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:56.218 19:05:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:56.218 19:05:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:56.218 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.218 ************************************ 00:04:56.218 START TEST rpc_plugins 00:04:56.218 ************************************ 00:04:56.218 19:05:33 -- common/autotest_common.sh@1102 -- # rpc_plugins 00:04:56.218 19:05:33 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:56.218 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.218 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.218 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.218 19:05:33 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:56.218 19:05:33 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:56.218 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.218 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.218 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.218 19:05:33 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:56.218 { 00:04:56.218 "name": "Malloc1", 00:04:56.218 "aliases": [ 00:04:56.218 "413aa021-251b-45ba-a990-d07f9b3c57f0" 00:04:56.218 ], 00:04:56.218 "product_name": "Malloc disk", 00:04:56.218 
"block_size": 4096, 00:04:56.218 "num_blocks": 256, 00:04:56.218 "uuid": "413aa021-251b-45ba-a990-d07f9b3c57f0", 00:04:56.218 "assigned_rate_limits": { 00:04:56.218 "rw_ios_per_sec": 0, 00:04:56.218 "rw_mbytes_per_sec": 0, 00:04:56.218 "r_mbytes_per_sec": 0, 00:04:56.218 "w_mbytes_per_sec": 0 00:04:56.218 }, 00:04:56.218 "claimed": false, 00:04:56.218 "zoned": false, 00:04:56.218 "supported_io_types": { 00:04:56.218 "read": true, 00:04:56.218 "write": true, 00:04:56.218 "unmap": true, 00:04:56.218 "write_zeroes": true, 00:04:56.218 "flush": true, 00:04:56.218 "reset": true, 00:04:56.218 "compare": false, 00:04:56.218 "compare_and_write": false, 00:04:56.218 "abort": true, 00:04:56.218 "nvme_admin": false, 00:04:56.218 "nvme_io": false 00:04:56.218 }, 00:04:56.218 "memory_domains": [ 00:04:56.218 { 00:04:56.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.218 "dma_device_type": 2 00:04:56.218 } 00:04:56.218 ], 00:04:56.218 "driver_specific": {} 00:04:56.218 } 00:04:56.218 ]' 00:04:56.218 19:05:33 -- rpc/rpc.sh@32 -- # jq length 00:04:56.218 19:05:33 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:56.218 19:05:33 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:56.218 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.218 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.218 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.218 19:05:33 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:56.218 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.218 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.218 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.218 19:05:33 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:56.218 19:05:33 -- rpc/rpc.sh@36 -- # jq length 00:04:56.218 ************************************ 00:04:56.218 END TEST rpc_plugins 00:04:56.218 ************************************ 00:04:56.218 19:05:33 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:56.218 00:04:56.218 real 0m0.161s 00:04:56.218 user 0m0.107s 00:04:56.218 sys 0m0.016s 00:04:56.218 19:05:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.218 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.477 19:05:33 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:56.477 19:05:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:56.477 19:05:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:56.477 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.477 ************************************ 00:04:56.477 START TEST rpc_trace_cmd_test 00:04:56.477 ************************************ 00:04:56.477 19:05:33 -- common/autotest_common.sh@1102 -- # rpc_trace_cmd_test 00:04:56.477 19:05:33 -- rpc/rpc.sh@40 -- # local info 00:04:56.477 19:05:33 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:56.477 19:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.477 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.477 19:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.477 19:05:33 -- rpc/rpc.sh@42 -- # info='{ 00:04:56.477 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56936", 00:04:56.477 "tpoint_group_mask": "0x8", 00:04:56.477 "iscsi_conn": { 00:04:56.477 "mask": "0x2", 00:04:56.477 "tpoint_mask": "0x0" 00:04:56.477 }, 00:04:56.477 "scsi": { 00:04:56.477 "mask": "0x4", 00:04:56.477 "tpoint_mask": "0x0" 00:04:56.477 }, 00:04:56.477 "bdev": { 00:04:56.477 "mask": "0x8", 00:04:56.477 "tpoint_mask": 
"0xffffffffffffffff" 00:04:56.477 }, 00:04:56.477 "nvmf_rdma": { 00:04:56.477 "mask": "0x10", 00:04:56.477 "tpoint_mask": "0x0" 00:04:56.477 }, 00:04:56.477 "nvmf_tcp": { 00:04:56.477 "mask": "0x20", 00:04:56.477 "tpoint_mask": "0x0" 00:04:56.477 }, 00:04:56.477 "ftl": { 00:04:56.477 "mask": "0x40", 00:04:56.477 "tpoint_mask": "0x0" 00:04:56.477 }, 00:04:56.478 "blobfs": { 00:04:56.478 "mask": "0x80", 00:04:56.478 "tpoint_mask": "0x0" 00:04:56.478 }, 00:04:56.478 "dsa": { 00:04:56.478 "mask": "0x200", 00:04:56.478 "tpoint_mask": "0x0" 00:04:56.478 }, 00:04:56.478 "thread": { 00:04:56.478 "mask": "0x400", 00:04:56.478 "tpoint_mask": "0x0" 00:04:56.478 }, 00:04:56.478 "nvme_pcie": { 00:04:56.478 "mask": "0x800", 00:04:56.478 "tpoint_mask": "0x0" 00:04:56.478 }, 00:04:56.478 "iaa": { 00:04:56.478 "mask": "0x1000", 00:04:56.478 "tpoint_mask": "0x0" 00:04:56.478 }, 00:04:56.478 "nvme_tcp": { 00:04:56.478 "mask": "0x2000", 00:04:56.478 "tpoint_mask": "0x0" 00:04:56.478 }, 00:04:56.478 "bdev_nvme": { 00:04:56.478 "mask": "0x4000", 00:04:56.478 "tpoint_mask": "0x0" 00:04:56.478 } 00:04:56.478 }' 00:04:56.478 19:05:33 -- rpc/rpc.sh@43 -- # jq length 00:04:56.478 19:05:33 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:56.478 19:05:33 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:56.478 19:05:33 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:56.478 19:05:33 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:56.478 19:05:33 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:56.478 19:05:33 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:56.737 19:05:33 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:56.737 19:05:33 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:56.737 ************************************ 00:04:56.737 END TEST rpc_trace_cmd_test 00:04:56.737 ************************************ 00:04:56.737 19:05:33 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:56.737 00:04:56.737 real 0m0.285s 00:04:56.737 user 0m0.241s 00:04:56.737 sys 0m0.031s 00:04:56.737 19:05:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.737 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.737 19:05:33 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:56.737 19:05:33 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:56.737 19:05:33 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:56.737 19:05:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:56.737 19:05:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:56.737 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.737 ************************************ 00:04:56.737 START TEST rpc_daemon_integrity 00:04:56.737 ************************************ 00:04:56.737 19:05:34 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:04:56.737 19:05:34 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.737 19:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.737 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.737 19:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.737 19:05:34 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.737 19:05:34 -- rpc/rpc.sh@13 -- # jq length 00:04:56.737 19:05:34 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.737 19:05:34 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.737 19:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.737 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.737 19:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.737 19:05:34 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:56.737 19:05:34 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.737 19:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.737 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.737 19:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.737 19:05:34 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.737 { 00:04:56.737 "name": "Malloc2", 00:04:56.737 "aliases": [ 00:04:56.737 "2f07bffc-57e2-4e19-a7cd-61174c536404" 00:04:56.737 ], 00:04:56.737 "product_name": "Malloc disk", 00:04:56.737 "block_size": 512, 00:04:56.737 "num_blocks": 16384, 00:04:56.737 "uuid": "2f07bffc-57e2-4e19-a7cd-61174c536404", 00:04:56.737 "assigned_rate_limits": { 00:04:56.737 "rw_ios_per_sec": 0, 00:04:56.737 "rw_mbytes_per_sec": 0, 00:04:56.737 "r_mbytes_per_sec": 0, 00:04:56.737 "w_mbytes_per_sec": 0 00:04:56.737 }, 00:04:56.737 "claimed": false, 00:04:56.737 "zoned": false, 00:04:56.737 "supported_io_types": { 00:04:56.737 "read": true, 00:04:56.737 "write": true, 00:04:56.737 "unmap": true, 00:04:56.737 "write_zeroes": true, 00:04:56.737 "flush": true, 00:04:56.737 "reset": true, 00:04:56.737 "compare": false, 00:04:56.737 "compare_and_write": false, 00:04:56.737 "abort": true, 00:04:56.737 "nvme_admin": false, 00:04:56.737 "nvme_io": false 00:04:56.737 }, 00:04:56.737 "memory_domains": [ 00:04:56.737 { 00:04:56.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.737 "dma_device_type": 2 00:04:56.737 } 00:04:56.737 ], 00:04:56.737 "driver_specific": {} 00:04:56.737 } 00:04:56.737 ]' 00:04:56.737 19:05:34 -- rpc/rpc.sh@17 -- # jq length 00:04:56.997 19:05:34 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.997 19:05:34 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:56.997 19:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.997 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.997 [2024-02-14 19:05:34.171769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:56.997 [2024-02-14 19:05:34.171865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.997 [2024-02-14 19:05:34.171940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:56.997 [2024-02-14 19:05:34.171955] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.997 [2024-02-14 19:05:34.174695] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.997 [2024-02-14 19:05:34.174743] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.997 Passthru0 00:04:56.997 19:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.997 19:05:34 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.997 19:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.997 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.997 19:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.997 19:05:34 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.997 { 00:04:56.997 "name": "Malloc2", 00:04:56.997 "aliases": [ 00:04:56.997 "2f07bffc-57e2-4e19-a7cd-61174c536404" 00:04:56.997 ], 00:04:56.997 "product_name": "Malloc disk", 00:04:56.997 "block_size": 512, 00:04:56.997 "num_blocks": 16384, 00:04:56.997 "uuid": "2f07bffc-57e2-4e19-a7cd-61174c536404", 00:04:56.997 "assigned_rate_limits": { 00:04:56.997 "rw_ios_per_sec": 0, 00:04:56.997 "rw_mbytes_per_sec": 0, 00:04:56.997 "r_mbytes_per_sec": 0, 00:04:56.997 
"w_mbytes_per_sec": 0 00:04:56.997 }, 00:04:56.997 "claimed": true, 00:04:56.997 "claim_type": "exclusive_write", 00:04:56.997 "zoned": false, 00:04:56.997 "supported_io_types": { 00:04:56.997 "read": true, 00:04:56.997 "write": true, 00:04:56.997 "unmap": true, 00:04:56.997 "write_zeroes": true, 00:04:56.997 "flush": true, 00:04:56.997 "reset": true, 00:04:56.997 "compare": false, 00:04:56.997 "compare_and_write": false, 00:04:56.997 "abort": true, 00:04:56.997 "nvme_admin": false, 00:04:56.997 "nvme_io": false 00:04:56.997 }, 00:04:56.997 "memory_domains": [ 00:04:56.997 { 00:04:56.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.998 "dma_device_type": 2 00:04:56.998 } 00:04:56.998 ], 00:04:56.998 "driver_specific": {} 00:04:56.998 }, 00:04:56.998 { 00:04:56.998 "name": "Passthru0", 00:04:56.998 "aliases": [ 00:04:56.998 "df1a7a99-2253-5657-b732-c422d2a6e21b" 00:04:56.998 ], 00:04:56.998 "product_name": "passthru", 00:04:56.998 "block_size": 512, 00:04:56.998 "num_blocks": 16384, 00:04:56.998 "uuid": "df1a7a99-2253-5657-b732-c422d2a6e21b", 00:04:56.998 "assigned_rate_limits": { 00:04:56.998 "rw_ios_per_sec": 0, 00:04:56.998 "rw_mbytes_per_sec": 0, 00:04:56.998 "r_mbytes_per_sec": 0, 00:04:56.998 "w_mbytes_per_sec": 0 00:04:56.998 }, 00:04:56.998 "claimed": false, 00:04:56.998 "zoned": false, 00:04:56.998 "supported_io_types": { 00:04:56.998 "read": true, 00:04:56.998 "write": true, 00:04:56.998 "unmap": true, 00:04:56.998 "write_zeroes": true, 00:04:56.998 "flush": true, 00:04:56.998 "reset": true, 00:04:56.998 "compare": false, 00:04:56.998 "compare_and_write": false, 00:04:56.998 "abort": true, 00:04:56.998 "nvme_admin": false, 00:04:56.998 "nvme_io": false 00:04:56.998 }, 00:04:56.998 "memory_domains": [ 00:04:56.998 { 00:04:56.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.998 "dma_device_type": 2 00:04:56.998 } 00:04:56.998 ], 00:04:56.998 "driver_specific": { 00:04:56.998 "passthru": { 00:04:56.998 "name": "Passthru0", 00:04:56.998 "base_bdev_name": "Malloc2" 00:04:56.998 } 00:04:56.998 } 00:04:56.998 } 00:04:56.998 ]' 00:04:56.998 19:05:34 -- rpc/rpc.sh@21 -- # jq length 00:04:56.998 19:05:34 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.998 19:05:34 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.998 19:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.998 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.998 19:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.998 19:05:34 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:56.998 19:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.998 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.998 19:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.998 19:05:34 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.998 19:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.998 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.998 19:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.998 19:05:34 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.998 19:05:34 -- rpc/rpc.sh@26 -- # jq length 00:04:56.998 ************************************ 00:04:56.998 END TEST rpc_daemon_integrity 00:04:56.998 ************************************ 00:04:56.998 19:05:34 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.998 00:04:56.998 real 0m0.348s 00:04:56.998 user 0m0.225s 00:04:56.998 sys 0m0.032s 00:04:56.998 19:05:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.998 
19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.998 19:05:34 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:56.998 19:05:34 -- rpc/rpc.sh@84 -- # killprocess 56936 00:04:56.998 19:05:34 -- common/autotest_common.sh@924 -- # '[' -z 56936 ']' 00:04:56.998 19:05:34 -- common/autotest_common.sh@928 -- # kill -0 56936 00:04:56.998 19:05:34 -- common/autotest_common.sh@929 -- # uname 00:04:56.998 19:05:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:56.998 19:05:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 56936 00:04:57.259 killing process with pid 56936 00:04:57.259 19:05:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:57.259 19:05:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:57.259 19:05:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 56936' 00:04:57.259 19:05:34 -- common/autotest_common.sh@943 -- # kill 56936 00:04:57.259 19:05:34 -- common/autotest_common.sh@948 -- # wait 56936 00:04:59.164 00:04:59.164 real 0m5.270s 00:04:59.164 user 0m6.401s 00:04:59.164 sys 0m0.701s 00:04:59.164 ************************************ 00:04:59.164 END TEST rpc 00:04:59.164 ************************************ 00:04:59.164 19:05:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.164 19:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.164 19:05:36 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:59.164 19:05:36 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:59.164 19:05:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:59.164 19:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.164 ************************************ 00:04:59.164 START TEST rpc_client 00:04:59.164 ************************************ 00:04:59.164 19:05:36 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:59.164 * Looking for test storage... 
00:04:59.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:59.164 19:05:36 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:59.164 OK 00:04:59.164 19:05:36 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.164 00:04:59.164 real 0m0.132s 00:04:59.164 user 0m0.060s 00:04:59.164 sys 0m0.078s 00:04:59.164 19:05:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.164 19:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.164 ************************************ 00:04:59.164 END TEST rpc_client 00:04:59.164 ************************************ 00:04:59.424 19:05:36 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:59.424 19:05:36 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:59.424 19:05:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:59.424 19:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.424 ************************************ 00:04:59.424 START TEST json_config 00:04:59.424 ************************************ 00:04:59.424 19:05:36 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:59.424 19:05:36 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.424 19:05:36 -- nvmf/common.sh@7 -- # uname -s 00:04:59.424 19:05:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.424 19:05:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.424 19:05:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.424 19:05:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.424 19:05:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.424 19:05:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.424 19:05:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.424 19:05:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.424 19:05:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.424 19:05:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.424 19:05:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1a5ec1fb-7411-49ca-a93a-15d5d1607752 00:04:59.424 19:05:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=1a5ec1fb-7411-49ca-a93a-15d5d1607752 00:04:59.424 19:05:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.424 19:05:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.424 19:05:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.424 19:05:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.424 19:05:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.424 19:05:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.424 19:05:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.424 19:05:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.424 19:05:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.424 19:05:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.424 19:05:36 -- paths/export.sh@5 -- # export PATH 00:04:59.424 19:05:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.424 19:05:36 -- nvmf/common.sh@46 -- # : 0 00:04:59.424 19:05:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:59.424 19:05:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:59.424 19:05:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:59.424 19:05:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.424 19:05:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.424 19:05:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:59.424 19:05:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:59.424 19:05:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:59.424 19:05:36 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:59.424 19:05:36 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:59.424 19:05:36 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:59.424 19:05:36 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:59.424 19:05:36 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:59.424 WARNING: No tests are enabled so not running JSON configuration tests 00:04:59.424 19:05:36 -- json_config/json_config.sh@27 -- # exit 0 00:04:59.424 00:04:59.424 real 0m0.087s 00:04:59.424 user 0m0.035s 00:04:59.424 sys 0m0.044s 00:04:59.425 19:05:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.425 19:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.425 ************************************ 00:04:59.425 END TEST json_config 00:04:59.425 ************************************ 00:04:59.425 19:05:36 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.425 19:05:36 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:59.425 19:05:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:59.425 19:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.425 ************************************ 00:04:59.425 START TEST json_config_extra_key 00:04:59.425 
************************************ 00:04:59.425 19:05:36 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.425 19:05:36 -- nvmf/common.sh@7 -- # uname -s 00:04:59.425 19:05:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.425 19:05:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.425 19:05:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.425 19:05:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.425 19:05:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.425 19:05:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.425 19:05:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.425 19:05:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.425 19:05:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.425 19:05:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.425 19:05:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1a5ec1fb-7411-49ca-a93a-15d5d1607752 00:04:59.425 19:05:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=1a5ec1fb-7411-49ca-a93a-15d5d1607752 00:04:59.425 19:05:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.425 19:05:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.425 19:05:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.425 19:05:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.425 19:05:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.425 19:05:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.425 19:05:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.425 19:05:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.425 19:05:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.425 19:05:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.425 19:05:36 -- paths/export.sh@5 -- # export PATH 00:04:59.425 19:05:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.425 19:05:36 -- nvmf/common.sh@46 -- # : 0 00:04:59.425 19:05:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:59.425 19:05:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:59.425 19:05:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:59.425 19:05:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.425 19:05:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.425 19:05:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:59.425 19:05:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:59.425 19:05:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:59.425 INFO: launching applications... 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=57236 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:59.425 Waiting for target to run... 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.425 19:05:36 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 57236 /var/tmp/spdk_tgt.sock 00:04:59.425 19:05:36 -- common/autotest_common.sh@817 -- # '[' -z 57236 ']' 00:04:59.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:59.425 19:05:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.425 19:05:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:59.425 19:05:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.425 19:05:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:59.425 19:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.684 [2024-02-14 19:05:36.956336] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:59.684 [2024-02-14 19:05:36.956536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57236 ] 00:04:59.943 [2024-02-14 19:05:37.299059] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.202 [2024-02-14 19:05:37.450657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.202 [2024-02-14 19:05:37.450902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.202 [2024-02-14 19:05:37.450949] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:01.579 00:05:01.579 INFO: shutting down applications... 00:05:01.579 19:05:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:01.579 19:05:38 -- common/autotest_common.sh@850 -- # return 0 00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
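Note: the shutdown traced below sends SIGINT to the target and then polls kill -0 in half-second steps, giving up after 30 attempts. A condensed form of that loop; the pid is the one from this run and is only illustrative.

    # Condensed shutdown pattern: request SIGINT, then wait for the pid to vanish.
    app_pid=57236
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done
    if kill -0 "$app_pid" 2>/dev/null; then
        echo "target did not exit"
    else
        echo "SPDK target shutdown done"
    fi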
00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 57236 ]] 00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 57236 00:05:01.579 [2024-02-14 19:05:38.640553] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57236 00:05:01.579 19:05:38 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:01.838 19:05:39 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:01.838 19:05:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:01.838 19:05:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57236 00:05:01.838 19:05:39 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:02.464 19:05:39 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:02.464 19:05:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:02.464 19:05:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57236 00:05:02.464 19:05:39 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:03.032 19:05:40 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:03.032 19:05:40 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:03.032 19:05:40 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57236 00:05:03.032 19:05:40 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:03.290 19:05:40 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:03.290 19:05:40 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:03.290 19:05:40 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57236 00:05:03.290 19:05:40 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:03.857 19:05:41 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:03.857 19:05:41 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:03.857 19:05:41 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57236 00:05:03.857 SPDK target shutdown done 00:05:03.857 Success 00:05:03.857 19:05:41 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:03.857 19:05:41 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:03.857 19:05:41 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:03.857 19:05:41 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:03.857 19:05:41 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:03.857 ************************************ 00:05:03.857 END TEST json_config_extra_key 00:05:03.857 ************************************ 00:05:03.857 00:05:03.857 real 0m4.408s 00:05:03.857 user 0m4.265s 00:05:03.857 sys 0m0.503s 00:05:03.857 19:05:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.857 19:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 19:05:41 -- spdk/autotest.sh@180 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.857 19:05:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:03.857 19:05:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:03.857 19:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 ************************************ 00:05:03.857 START TEST alias_rpc 00:05:03.857 ************************************ 00:05:03.857 19:05:41 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:04.116 * Looking for test storage... 00:05:04.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:04.116 19:05:41 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:04.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.116 19:05:41 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57334 00:05:04.116 19:05:41 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57334 00:05:04.116 19:05:41 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.116 19:05:41 -- common/autotest_common.sh@817 -- # '[' -z 57334 ']' 00:05:04.116 19:05:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.116 19:05:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:04.116 19:05:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.116 19:05:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:04.116 19:05:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.116 [2024-02-14 19:05:41.417376] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:04.116 [2024-02-14 19:05:41.417566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57334 ] 00:05:04.375 [2024-02-14 19:05:41.590558] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.634 [2024-02-14 19:05:41.797374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:04.634 [2024-02-14 19:05:41.797662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.010 19:05:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:06.010 19:05:43 -- common/autotest_common.sh@850 -- # return 0 00:05:06.010 19:05:43 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:06.010 19:05:43 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57334 00:05:06.010 19:05:43 -- common/autotest_common.sh@924 -- # '[' -z 57334 ']' 00:05:06.010 19:05:43 -- common/autotest_common.sh@928 -- # kill -0 57334 00:05:06.010 19:05:43 -- common/autotest_common.sh@929 -- # uname 00:05:06.010 19:05:43 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:06.010 19:05:43 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57334 00:05:06.010 killing process with pid 57334 00:05:06.010 19:05:43 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:06.010 19:05:43 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:06.010 19:05:43 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57334' 00:05:06.010 19:05:43 -- common/autotest_common.sh@943 -- # kill 57334 00:05:06.010 19:05:43 -- common/autotest_common.sh@948 -- # wait 57334 00:05:07.913 ************************************ 00:05:07.913 END TEST alias_rpc 00:05:07.913 ************************************ 00:05:07.913 00:05:07.913 real 0m3.977s 00:05:07.913 user 0m4.367s 00:05:07.913 sys 0m0.468s 00:05:07.913 19:05:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.913 19:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:07.913 19:05:45 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:07.913 19:05:45 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:07.913 19:05:45 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:07.913 19:05:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:07.913 19:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:07.913 ************************************ 00:05:07.913 START TEST spdkcli_tcp 00:05:07.913 ************************************ 00:05:07.913 19:05:45 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:07.913 * Looking for test storage... 
00:05:08.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:08.171 19:05:45 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:08.171 19:05:45 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:08.171 19:05:45 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:08.171 19:05:45 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:08.171 19:05:45 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:08.171 19:05:45 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:08.171 19:05:45 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:08.171 19:05:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:08.171 19:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.171 19:05:45 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57434 00:05:08.171 19:05:45 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:08.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.171 19:05:45 -- spdkcli/tcp.sh@27 -- # waitforlisten 57434 00:05:08.171 19:05:45 -- common/autotest_common.sh@817 -- # '[' -z 57434 ']' 00:05:08.171 19:05:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.171 19:05:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.171 19:05:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.171 19:05:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.171 19:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.171 [2024-02-14 19:05:45.455939] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
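Note: the lines that follow bridge the target's UNIX-domain RPC socket to TCP with socat and query it with rpc.py over 127.0.0.1:9998. A minimal sketch of that bridge, with the port, socket path and rpc.py flags taken from this run; without socat's fork option the bridge serves a single connection, which is all the test needs.

    # Expose the default RPC socket on TCP and issue one RPC over it.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null || true    # socat may already have exited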
00:05:08.172 [2024-02-14 19:05:45.456105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57434 ] 00:05:08.430 [2024-02-14 19:05:45.626738] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.430 [2024-02-14 19:05:45.807618] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.430 [2024-02-14 19:05:45.808010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.430 [2024-02-14 19:05:45.808158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.808 19:05:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.808 19:05:47 -- common/autotest_common.sh@850 -- # return 0 00:05:09.808 19:05:47 -- spdkcli/tcp.sh@31 -- # socat_pid=57464 00:05:09.808 19:05:47 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.808 19:05:47 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.067 [ 00:05:10.067 "bdev_malloc_delete", 00:05:10.067 "bdev_malloc_create", 00:05:10.067 "bdev_null_resize", 00:05:10.067 "bdev_null_delete", 00:05:10.067 "bdev_null_create", 00:05:10.067 "bdev_nvme_cuse_unregister", 00:05:10.067 "bdev_nvme_cuse_register", 00:05:10.067 "bdev_opal_new_user", 00:05:10.067 "bdev_opal_set_lock_state", 00:05:10.067 "bdev_opal_delete", 00:05:10.067 "bdev_opal_get_info", 00:05:10.067 "bdev_opal_create", 00:05:10.067 "bdev_nvme_opal_revert", 00:05:10.067 "bdev_nvme_opal_init", 00:05:10.067 "bdev_nvme_send_cmd", 00:05:10.067 "bdev_nvme_get_path_iostat", 00:05:10.067 "bdev_nvme_get_mdns_discovery_info", 00:05:10.067 "bdev_nvme_stop_mdns_discovery", 00:05:10.067 "bdev_nvme_start_mdns_discovery", 00:05:10.067 "bdev_nvme_set_multipath_policy", 00:05:10.068 "bdev_nvme_set_preferred_path", 00:05:10.068 "bdev_nvme_get_io_paths", 00:05:10.068 "bdev_nvme_remove_error_injection", 00:05:10.068 "bdev_nvme_add_error_injection", 00:05:10.068 "bdev_nvme_get_discovery_info", 00:05:10.068 "bdev_nvme_stop_discovery", 00:05:10.068 "bdev_nvme_start_discovery", 00:05:10.068 "bdev_nvme_get_controller_health_info", 00:05:10.068 "bdev_nvme_disable_controller", 00:05:10.068 "bdev_nvme_enable_controller", 00:05:10.068 "bdev_nvme_reset_controller", 00:05:10.068 "bdev_nvme_get_transport_statistics", 00:05:10.068 "bdev_nvme_apply_firmware", 00:05:10.068 "bdev_nvme_detach_controller", 00:05:10.068 "bdev_nvme_get_controllers", 00:05:10.068 "bdev_nvme_attach_controller", 00:05:10.068 "bdev_nvme_set_hotplug", 00:05:10.068 "bdev_nvme_set_options", 00:05:10.068 "bdev_passthru_delete", 00:05:10.068 "bdev_passthru_create", 00:05:10.068 "bdev_lvol_grow_lvstore", 00:05:10.068 "bdev_lvol_get_lvols", 00:05:10.068 "bdev_lvol_get_lvstores", 00:05:10.068 "bdev_lvol_delete", 00:05:10.068 "bdev_lvol_set_read_only", 00:05:10.068 "bdev_lvol_resize", 00:05:10.068 "bdev_lvol_decouple_parent", 00:05:10.068 "bdev_lvol_inflate", 00:05:10.068 "bdev_lvol_rename", 00:05:10.068 "bdev_lvol_clone_bdev", 00:05:10.068 "bdev_lvol_clone", 00:05:10.068 "bdev_lvol_snapshot", 00:05:10.068 "bdev_lvol_create", 00:05:10.068 "bdev_lvol_delete_lvstore", 00:05:10.068 "bdev_lvol_rename_lvstore", 00:05:10.068 "bdev_lvol_create_lvstore", 00:05:10.068 "bdev_raid_set_options", 00:05:10.068 "bdev_raid_remove_base_bdev", 00:05:10.068 "bdev_raid_add_base_bdev", 
00:05:10.068 "bdev_raid_delete", 00:05:10.068 "bdev_raid_create", 00:05:10.068 "bdev_raid_get_bdevs", 00:05:10.068 "bdev_error_inject_error", 00:05:10.068 "bdev_error_delete", 00:05:10.068 "bdev_error_create", 00:05:10.068 "bdev_split_delete", 00:05:10.068 "bdev_split_create", 00:05:10.068 "bdev_delay_delete", 00:05:10.068 "bdev_delay_create", 00:05:10.068 "bdev_delay_update_latency", 00:05:10.068 "bdev_zone_block_delete", 00:05:10.068 "bdev_zone_block_create", 00:05:10.068 "blobfs_create", 00:05:10.068 "blobfs_detect", 00:05:10.068 "blobfs_set_cache_size", 00:05:10.068 "bdev_xnvme_delete", 00:05:10.068 "bdev_xnvme_create", 00:05:10.068 "bdev_aio_delete", 00:05:10.068 "bdev_aio_rescan", 00:05:10.068 "bdev_aio_create", 00:05:10.068 "bdev_ftl_set_property", 00:05:10.068 "bdev_ftl_get_properties", 00:05:10.068 "bdev_ftl_get_stats", 00:05:10.068 "bdev_ftl_unmap", 00:05:10.068 "bdev_ftl_unload", 00:05:10.068 "bdev_ftl_delete", 00:05:10.068 "bdev_ftl_load", 00:05:10.068 "bdev_ftl_create", 00:05:10.068 "bdev_virtio_attach_controller", 00:05:10.068 "bdev_virtio_scsi_get_devices", 00:05:10.068 "bdev_virtio_detach_controller", 00:05:10.068 "bdev_virtio_blk_set_hotplug", 00:05:10.068 "bdev_iscsi_delete", 00:05:10.068 "bdev_iscsi_create", 00:05:10.068 "bdev_iscsi_set_options", 00:05:10.068 "accel_error_inject_error", 00:05:10.068 "ioat_scan_accel_module", 00:05:10.068 "dsa_scan_accel_module", 00:05:10.068 "iaa_scan_accel_module", 00:05:10.068 "iscsi_set_options", 00:05:10.068 "iscsi_get_auth_groups", 00:05:10.068 "iscsi_auth_group_remove_secret", 00:05:10.068 "iscsi_auth_group_add_secret", 00:05:10.068 "iscsi_delete_auth_group", 00:05:10.068 "iscsi_create_auth_group", 00:05:10.068 "iscsi_set_discovery_auth", 00:05:10.068 "iscsi_get_options", 00:05:10.068 "iscsi_target_node_request_logout", 00:05:10.068 "iscsi_target_node_set_redirect", 00:05:10.068 "iscsi_target_node_set_auth", 00:05:10.068 "iscsi_target_node_add_lun", 00:05:10.068 "iscsi_get_connections", 00:05:10.068 "iscsi_portal_group_set_auth", 00:05:10.068 "iscsi_start_portal_group", 00:05:10.068 "iscsi_delete_portal_group", 00:05:10.068 "iscsi_create_portal_group", 00:05:10.068 "iscsi_get_portal_groups", 00:05:10.068 "iscsi_delete_target_node", 00:05:10.068 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.068 "iscsi_target_node_add_pg_ig_maps", 00:05:10.068 "iscsi_create_target_node", 00:05:10.068 "iscsi_get_target_nodes", 00:05:10.068 "iscsi_delete_initiator_group", 00:05:10.068 "iscsi_initiator_group_remove_initiators", 00:05:10.068 "iscsi_initiator_group_add_initiators", 00:05:10.068 "iscsi_create_initiator_group", 00:05:10.068 "iscsi_get_initiator_groups", 00:05:10.068 "nvmf_set_crdt", 00:05:10.068 "nvmf_set_config", 00:05:10.068 "nvmf_set_max_subsystems", 00:05:10.068 "nvmf_subsystem_get_listeners", 00:05:10.068 "nvmf_subsystem_get_qpairs", 00:05:10.068 "nvmf_subsystem_get_controllers", 00:05:10.068 "nvmf_get_stats", 00:05:10.068 "nvmf_get_transports", 00:05:10.068 "nvmf_create_transport", 00:05:10.068 "nvmf_get_targets", 00:05:10.068 "nvmf_delete_target", 00:05:10.068 "nvmf_create_target", 00:05:10.068 "nvmf_subsystem_allow_any_host", 00:05:10.068 "nvmf_subsystem_remove_host", 00:05:10.068 "nvmf_subsystem_add_host", 00:05:10.068 "nvmf_subsystem_remove_ns", 00:05:10.068 "nvmf_subsystem_add_ns", 00:05:10.068 "nvmf_subsystem_listener_set_ana_state", 00:05:10.068 "nvmf_discovery_get_referrals", 00:05:10.068 "nvmf_discovery_remove_referral", 00:05:10.068 "nvmf_discovery_add_referral", 00:05:10.068 "nvmf_subsystem_remove_listener", 00:05:10.068 
"nvmf_subsystem_add_listener", 00:05:10.068 "nvmf_delete_subsystem", 00:05:10.068 "nvmf_create_subsystem", 00:05:10.068 "nvmf_get_subsystems", 00:05:10.068 "env_dpdk_get_mem_stats", 00:05:10.068 "nbd_get_disks", 00:05:10.068 "nbd_stop_disk", 00:05:10.068 "nbd_start_disk", 00:05:10.068 "ublk_recover_disk", 00:05:10.068 "ublk_get_disks", 00:05:10.068 "ublk_stop_disk", 00:05:10.068 "ublk_start_disk", 00:05:10.068 "ublk_destroy_target", 00:05:10.068 "ublk_create_target", 00:05:10.068 "virtio_blk_create_transport", 00:05:10.068 "virtio_blk_get_transports", 00:05:10.068 "vhost_controller_set_coalescing", 00:05:10.068 "vhost_get_controllers", 00:05:10.068 "vhost_delete_controller", 00:05:10.068 "vhost_create_blk_controller", 00:05:10.068 "vhost_scsi_controller_remove_target", 00:05:10.068 "vhost_scsi_controller_add_target", 00:05:10.068 "vhost_start_scsi_controller", 00:05:10.068 "vhost_create_scsi_controller", 00:05:10.068 "thread_set_cpumask", 00:05:10.068 "framework_get_scheduler", 00:05:10.068 "framework_set_scheduler", 00:05:10.068 "framework_get_reactors", 00:05:10.068 "thread_get_io_channels", 00:05:10.068 "thread_get_pollers", 00:05:10.069 "thread_get_stats", 00:05:10.069 "framework_monitor_context_switch", 00:05:10.069 "spdk_kill_instance", 00:05:10.069 "log_enable_timestamps", 00:05:10.069 "log_get_flags", 00:05:10.069 "log_clear_flag", 00:05:10.069 "log_set_flag", 00:05:10.069 "log_get_level", 00:05:10.069 "log_set_level", 00:05:10.069 "log_get_print_level", 00:05:10.069 "log_set_print_level", 00:05:10.069 "framework_enable_cpumask_locks", 00:05:10.069 "framework_disable_cpumask_locks", 00:05:10.069 "framework_wait_init", 00:05:10.069 "framework_start_init", 00:05:10.069 "scsi_get_devices", 00:05:10.069 "bdev_get_histogram", 00:05:10.069 "bdev_enable_histogram", 00:05:10.069 "bdev_set_qos_limit", 00:05:10.069 "bdev_set_qd_sampling_period", 00:05:10.069 "bdev_get_bdevs", 00:05:10.069 "bdev_reset_iostat", 00:05:10.069 "bdev_get_iostat", 00:05:10.069 "bdev_examine", 00:05:10.069 "bdev_wait_for_examine", 00:05:10.069 "bdev_set_options", 00:05:10.069 "notify_get_notifications", 00:05:10.069 "notify_get_types", 00:05:10.069 "accel_get_stats", 00:05:10.069 "accel_set_options", 00:05:10.069 "accel_set_driver", 00:05:10.069 "accel_crypto_key_destroy", 00:05:10.069 "accel_crypto_keys_get", 00:05:10.069 "accel_crypto_key_create", 00:05:10.069 "accel_assign_opc", 00:05:10.069 "accel_get_module_info", 00:05:10.069 "accel_get_opc_assignments", 00:05:10.069 "vmd_rescan", 00:05:10.069 "vmd_remove_device", 00:05:10.069 "vmd_enable", 00:05:10.069 "sock_set_default_impl", 00:05:10.069 "sock_impl_set_options", 00:05:10.069 "sock_impl_get_options", 00:05:10.069 "iobuf_get_stats", 00:05:10.069 "iobuf_set_options", 00:05:10.069 "framework_get_pci_devices", 00:05:10.069 "framework_get_config", 00:05:10.069 "framework_get_subsystems", 00:05:10.069 "trace_get_info", 00:05:10.069 "trace_get_tpoint_group_mask", 00:05:10.069 "trace_disable_tpoint_group", 00:05:10.069 "trace_enable_tpoint_group", 00:05:10.069 "trace_clear_tpoint_mask", 00:05:10.069 "trace_set_tpoint_mask", 00:05:10.069 "spdk_get_version", 00:05:10.069 "rpc_get_methods" 00:05:10.069 ] 00:05:10.069 19:05:47 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.069 19:05:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:10.069 19:05:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.069 19:05:47 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.069 19:05:47 -- spdkcli/tcp.sh@38 -- # killprocess 57434 00:05:10.069 
19:05:47 -- common/autotest_common.sh@924 -- # '[' -z 57434 ']' 00:05:10.069 19:05:47 -- common/autotest_common.sh@928 -- # kill -0 57434 00:05:10.069 19:05:47 -- common/autotest_common.sh@929 -- # uname 00:05:10.069 19:05:47 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:10.069 19:05:47 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57434 00:05:10.069 killing process with pid 57434 00:05:10.069 19:05:47 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:10.069 19:05:47 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:10.069 19:05:47 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57434' 00:05:10.069 19:05:47 -- common/autotest_common.sh@943 -- # kill 57434 00:05:10.069 19:05:47 -- common/autotest_common.sh@948 -- # wait 57434 00:05:12.014 ************************************ 00:05:12.014 END TEST spdkcli_tcp 00:05:12.014 ************************************ 00:05:12.014 00:05:12.014 real 0m4.153s 00:05:12.014 user 0m7.763s 00:05:12.014 sys 0m0.512s 00:05:12.014 19:05:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.014 19:05:49 -- common/autotest_common.sh@10 -- # set +x 00:05:12.272 19:05:49 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.272 19:05:49 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:12.272 19:05:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:12.272 19:05:49 -- common/autotest_common.sh@10 -- # set +x 00:05:12.272 ************************************ 00:05:12.272 START TEST dpdk_mem_utility 00:05:12.272 ************************************ 00:05:12.272 19:05:49 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.272 * Looking for test storage... 00:05:12.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:12.272 19:05:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.272 19:05:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57549 00:05:12.272 19:05:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.272 19:05:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57549 00:05:12.272 19:05:49 -- common/autotest_common.sh@817 -- # '[' -z 57549 ']' 00:05:12.272 19:05:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.272 19:05:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:12.272 19:05:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.272 19:05:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:12.272 19:05:49 -- common/autotest_common.sh@10 -- # set +x 00:05:12.272 [2024-02-14 19:05:49.642179] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
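Note: the dpdk_mem_utility test below asks the running target to dump its DPDK memory state and then summarizes the dump with dpdk_mem_info.py. A condensed sketch of that flow, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock; in this run the RPC reports the dump file as /tmp/spdk_mem_dump.txt, which the script reads by default.

    # Dump DPDK memory state from a running target, then summarize it.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    "$SPDK"/scripts/dpdk_mem_info.py                # heap / mempool / memzone totals
    "$SPDK"/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0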
00:05:12.272 [2024-02-14 19:05:49.642572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57549 ] 00:05:12.531 [2024-02-14 19:05:49.812213] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.789 [2024-02-14 19:05:49.990451] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.789 [2024-02-14 19:05:49.990711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.167 19:05:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:14.167 19:05:51 -- common/autotest_common.sh@850 -- # return 0 00:05:14.167 19:05:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:14.167 19:05:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:14.167 19:05:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.167 19:05:51 -- common/autotest_common.sh@10 -- # set +x 00:05:14.167 { 00:05:14.167 "filename": "/tmp/spdk_mem_dump.txt" 00:05:14.167 } 00:05:14.167 19:05:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.167 19:05:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:14.167 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:14.167 1 heaps totaling size 820.000000 MiB 00:05:14.167 size: 820.000000 MiB heap id: 0 00:05:14.167 end heaps---------- 00:05:14.167 8 mempools totaling size 598.116089 MiB 00:05:14.167 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:14.167 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:14.167 size: 84.521057 MiB name: bdev_io_57549 00:05:14.167 size: 51.011292 MiB name: evtpool_57549 00:05:14.167 size: 50.003479 MiB name: msgpool_57549 00:05:14.167 size: 21.763794 MiB name: PDU_Pool 00:05:14.167 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:14.167 size: 0.026123 MiB name: Session_Pool 00:05:14.167 end mempools------- 00:05:14.167 6 memzones totaling size 4.142822 MiB 00:05:14.167 size: 1.000366 MiB name: RG_ring_0_57549 00:05:14.167 size: 1.000366 MiB name: RG_ring_1_57549 00:05:14.167 size: 1.000366 MiB name: RG_ring_4_57549 00:05:14.167 size: 1.000366 MiB name: RG_ring_5_57549 00:05:14.167 size: 0.125366 MiB name: RG_ring_2_57549 00:05:14.167 size: 0.015991 MiB name: RG_ring_3_57549 00:05:14.167 end memzones------- 00:05:14.167 19:05:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:14.167 heap id: 0 total size: 820.000000 MiB number of busy elements: 303 number of free elements: 18 00:05:14.167 list of free elements. 
size: 18.450806 MiB 00:05:14.167 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:14.167 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:14.167 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:14.167 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:14.167 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:14.167 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:14.167 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:14.167 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:14.167 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:14.167 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:14.167 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:14.167 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:14.167 element at address: 0x20001b000000 with size: 0.564148 MiB 00:05:14.167 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:14.167 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:14.167 element at address: 0x200013800000 with size: 0.467896 MiB 00:05:14.167 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:14.167 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:14.167 list of standard malloc elements. size: 199.284790 MiB 00:05:14.167 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:14.167 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:14.167 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:14.167 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:14.167 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:14.167 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:14.167 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:14.167 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:14.167 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:14.167 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:14.167 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:14.167 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:14.167 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:14.167 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:14.168 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927d1c0 
with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0929c0 with size: 0.000244 MiB 
00:05:14.168 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:14.168 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:14.169 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:14.169 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:14.169 element at 
address: 0x20002846b380 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846e480 
with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:14.169 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:14.169 list of memzone associated elements. 
size: 602.264404 MiB 00:05:14.169 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:14.169 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:14.169 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:14.169 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:14.169 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:14.169 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_57549_0 00:05:14.169 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:14.169 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57549_0 00:05:14.169 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:14.169 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57549_0 00:05:14.169 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:14.169 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:14.169 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:14.169 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:14.169 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:14.169 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57549 00:05:14.169 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:14.169 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57549 00:05:14.169 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:14.169 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57549 00:05:14.169 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:14.169 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:14.169 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:14.169 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:14.170 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:14.170 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:14.170 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:14.170 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:14.170 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:14.170 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57549 00:05:14.170 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:14.170 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57549 00:05:14.170 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:14.170 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57549 00:05:14.170 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:14.170 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57549 00:05:14.170 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:14.170 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57549 00:05:14.170 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:14.170 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:14.170 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:14.170 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:14.170 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:14.170 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:14.170 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:14.170 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_57549 00:05:14.170 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:14.170 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:14.170 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:14.170 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:14.170 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:14.170 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57549 00:05:14.170 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:14.170 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:14.170 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:14.170 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57549 00:05:14.170 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:14.170 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57549 00:05:14.170 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:14.170 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:14.170 19:05:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:14.170 19:05:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57549 00:05:14.170 19:05:51 -- common/autotest_common.sh@924 -- # '[' -z 57549 ']' 00:05:14.170 19:05:51 -- common/autotest_common.sh@928 -- # kill -0 57549 00:05:14.170 19:05:51 -- common/autotest_common.sh@929 -- # uname 00:05:14.170 19:05:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:14.170 19:05:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57549 00:05:14.170 killing process with pid 57549 00:05:14.170 19:05:51 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:14.170 19:05:51 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:14.170 19:05:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57549' 00:05:14.170 19:05:51 -- common/autotest_common.sh@943 -- # kill 57549 00:05:14.170 19:05:51 -- common/autotest_common.sh@948 -- # wait 57549 00:05:16.070 00:05:16.070 real 0m3.862s 00:05:16.070 user 0m4.171s 00:05:16.070 sys 0m0.448s 00:05:16.070 19:05:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.070 19:05:53 -- common/autotest_common.sh@10 -- # set +x 00:05:16.070 ************************************ 00:05:16.070 END TEST dpdk_mem_utility 00:05:16.070 ************************************ 00:05:16.070 19:05:53 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:16.070 19:05:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:16.070 19:05:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:16.070 19:05:53 -- common/autotest_common.sh@10 -- # set +x 00:05:16.070 ************************************ 00:05:16.070 START TEST event 00:05:16.070 ************************************ 00:05:16.070 19:05:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:16.070 * Looking for test storage... 
00:05:16.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:16.070 19:05:53 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:16.070 19:05:53 -- bdev/nbd_common.sh@6 -- # set -e 00:05:16.070 19:05:53 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.070 19:05:53 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:05:16.070 19:05:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:16.070 19:05:53 -- common/autotest_common.sh@10 -- # set +x 00:05:16.070 ************************************ 00:05:16.070 START TEST event_perf 00:05:16.070 ************************************ 00:05:16.070 19:05:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.327 Running I/O for 1 seconds...[2024-02-14 19:05:53.496851] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:16.327 [2024-02-14 19:05:53.497219] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57650 ] 00:05:16.327 [2024-02-14 19:05:53.662947] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.585 [2024-02-14 19:05:53.831937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.585 Running I/O for 1 seconds...[2024-02-14 19:05:53.832093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.585 [2024-02-14 19:05:53.832225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.585 [2024-02-14 19:05:53.832233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.958 00:05:17.958 lcore 0: 186360 00:05:17.958 lcore 1: 186361 00:05:17.958 lcore 2: 186360 00:05:17.958 lcore 3: 186359 00:05:17.958 done. 00:05:17.958 00:05:17.958 real 0m1.737s 00:05:17.958 user 0m4.513s 00:05:17.958 sys 0m0.101s 00:05:17.958 19:05:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.958 19:05:55 -- common/autotest_common.sh@10 -- # set +x 00:05:17.958 ************************************ 00:05:17.958 END TEST event_perf 00:05:17.958 ************************************ 00:05:17.958 19:05:55 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:17.958 19:05:55 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:05:17.958 19:05:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:17.958 19:05:55 -- common/autotest_common.sh@10 -- # set +x 00:05:17.958 ************************************ 00:05:17.958 START TEST event_reactor 00:05:17.958 ************************************ 00:05:17.958 19:05:55 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:17.958 [2024-02-14 19:05:55.276376] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
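The event_perf block above runs the app on four cores for one second and reports how many events each lcore processed. A minimal way to repeat that run by hand, assuming a built SPDK tree under a hypothetical $SPDK_DIR:

# Same mask and duration as the logged run: four reactors, one second
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
# Prints one "lcore N: <events processed>" line per core, then "done."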
00:05:17.958 [2024-02-14 19:05:55.276558] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57696 ] 00:05:18.217 [2024-02-14 19:05:55.435037] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.217 [2024-02-14 19:05:55.619156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.593 test_start 00:05:19.593 oneshot 00:05:19.593 tick 100 00:05:19.593 tick 100 00:05:19.593 tick 250 00:05:19.593 tick 100 00:05:19.593 tick 100 00:05:19.593 tick 250 00:05:19.593 tick 500 00:05:19.593 tick 100 00:05:19.593 tick 100 00:05:19.593 tick 100 00:05:19.593 tick 250 00:05:19.593 tick 100 00:05:19.593 tick 100 00:05:19.593 test_end 00:05:19.593 00:05:19.593 real 0m1.720s 00:05:19.593 user 0m1.520s 00:05:19.593 sys 0m0.090s 00:05:19.593 ************************************ 00:05:19.593 END TEST event_reactor 00:05:19.593 ************************************ 00:05:19.593 19:05:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.593 19:05:56 -- common/autotest_common.sh@10 -- # set +x 00:05:19.593 19:05:57 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.593 19:05:57 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:05:19.593 19:05:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:19.593 19:05:57 -- common/autotest_common.sh@10 -- # set +x 00:05:19.851 ************************************ 00:05:19.851 START TEST event_reactor_perf 00:05:19.851 ************************************ 00:05:19.851 19:05:57 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.851 [2024-02-14 19:05:57.052428] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
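The event_reactor test above starts a single reactor, schedules a one-shot event plus pollers at three different periods, and prints a line for each firing between test_start and test_end (the oneshot and tick 100/250/500 lines). Reproducing it, again assuming a hypothetical $SPDK_DIR build:

# Single core for one second; the -c 0x1 coremask is set internally,
# as the EAL parameter line above shows
"$SPDK_DIR/test/event/reactor/reactor" -t 1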
00:05:19.851 [2024-02-14 19:05:57.052592] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57733 ] 00:05:19.851 [2024-02-14 19:05:57.210682] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.109 [2024-02-14 19:05:57.382550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.485 test_start 00:05:21.485 test_end 00:05:21.485 Performance: 307198 events per second 00:05:21.485 ************************************ 00:05:21.485 END TEST event_reactor_perf 00:05:21.485 ************************************ 00:05:21.485 00:05:21.485 real 0m1.682s 00:05:21.485 user 0m1.490s 00:05:21.485 sys 0m0.084s 00:05:21.485 19:05:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.485 19:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:21.485 19:05:58 -- event/event.sh@49 -- # uname -s 00:05:21.485 19:05:58 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:21.485 19:05:58 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.485 19:05:58 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:21.485 19:05:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:21.485 19:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:21.485 ************************************ 00:05:21.485 START TEST event_scheduler 00:05:21.485 ************************************ 00:05:21.485 19:05:58 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.485 * Looking for test storage... 00:05:21.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:21.485 19:05:58 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:21.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.485 19:05:58 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57794 00:05:21.485 19:05:58 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.485 19:05:58 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:21.485 19:05:58 -- scheduler/scheduler.sh@37 -- # waitforlisten 57794 00:05:21.485 19:05:58 -- common/autotest_common.sh@817 -- # '[' -z 57794 ']' 00:05:21.485 19:05:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.485 19:05:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:21.485 19:05:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.485 19:05:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:21.485 19:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:21.744 [2024-02-14 19:05:58.920898] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
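Each of these sub-tests is wrapped by the run_test helper, which is what produces the START TEST / END TEST banners and the real/user/sys timing lines scattered through this log. A stripped-down sketch of that wrapper (the real helper in autotest_common.sh also validates the argument count and manages xtrace):

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    # Run the test command; bash's time builtin yields the real/user/sys lines
    time "$@"
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}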
00:05:21.744 [2024-02-14 19:05:58.921407] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57794 ] 00:05:21.744 [2024-02-14 19:05:59.096046] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.002 [2024-02-14 19:05:59.331300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.002 [2024-02-14 19:05:59.331467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.002 [2024-02-14 19:05:59.331900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.002 [2024-02-14 19:05:59.331906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.569 19:05:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:22.569 19:05:59 -- common/autotest_common.sh@850 -- # return 0 00:05:22.569 19:05:59 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:22.569 19:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.569 19:05:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.569 POWER: Env isn't set yet! 00:05:22.569 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:22.569 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.569 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.569 POWER: Attempting to initialise PSTAT power management... 00:05:22.569 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.569 POWER: Cannot set governor of lcore 0 to performance 00:05:22.569 POWER: Attempting to initialise AMD PSTATE power management... 00:05:22.569 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.569 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.569 POWER: Attempting to initialise CPPC power management... 00:05:22.569 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.569 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.569 POWER: Attempting to initialise VM power management... 00:05:22.569 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:22.569 POWER: Unable to set Power Management Environment for lcore 0 00:05:22.569 [2024-02-14 19:05:59.782831] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:22.569 [2024-02-14 19:05:59.782855] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:22.569 [2024-02-14 19:05:59.782870] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:22.569 19:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.569 19:05:59 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:22.569 19:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.569 19:05:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.827 [2024-02-14 19:06:00.044110] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:22.827 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.827 19:06:00 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:22.827 19:06:00 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:22.827 19:06:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 ************************************ 00:05:22.828 START TEST scheduler_create_thread 00:05:22.828 ************************************ 00:05:22.828 19:06:00 -- common/autotest_common.sh@1102 -- # scheduler_create_thread 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 2 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 3 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 4 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 5 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 6 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 7 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 8 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 9 00:05:22.828 
19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 10 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:22.828 19:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.828 19:06:00 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.828 19:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.828 19:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:24.203 19:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.203 00:05:24.203 real 0m1.179s 00:05:24.203 user 0m0.020s 00:05:24.203 sys 0m0.006s 00:05:24.203 19:06:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.203 19:06:01 -- common/autotest_common.sh@10 -- # set +x 00:05:24.203 ************************************ 00:05:24.203 END TEST scheduler_create_thread 00:05:24.203 ************************************ 00:05:24.203 19:06:01 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:24.203 19:06:01 -- scheduler/scheduler.sh@46 -- # killprocess 57794 00:05:24.203 19:06:01 -- common/autotest_common.sh@924 -- # '[' -z 57794 ']' 00:05:24.203 19:06:01 -- common/autotest_common.sh@928 -- # kill -0 57794 00:05:24.203 19:06:01 -- common/autotest_common.sh@929 -- # uname 00:05:24.203 19:06:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:24.203 19:06:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57794 00:05:24.203 19:06:01 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:05:24.203 19:06:01 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:05:24.203 killing process with pid 57794 00:05:24.203 19:06:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57794' 00:05:24.203 19:06:01 -- common/autotest_common.sh@943 -- # kill 57794 00:05:24.203 19:06:01 -- common/autotest_common.sh@948 -- # wait 57794 00:05:24.461 [2024-02-14 19:06:01.715810] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
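The scheduler test above drives everything over JSON-RPC: it selects the dynamic scheduler (falling back gracefully when no cpufreq governor is available, as the POWER messages show), finishes framework init, then creates, re-weights and deletes threads through the scheduler_plugin. The traced sequence boils down to roughly the following, assuming rpc_cmd forwards to scripts/rpc.py against the app's /var/tmp/spdk.sock as in the log:

rpc_cmd framework_set_scheduler dynamic   # the POWER/governor errors above are non-fatal
rpc_cmd framework_start_init

# Pinned threads, one per core, some fully active (-a 100) and some idle (-a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

# Unpinned threads; the returned thread id is then re-weighted or deleted
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
deleted_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$deleted_id"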
00:05:25.400 00:05:25.400 real 0m4.045s 00:05:25.400 user 0m6.200s 00:05:25.400 sys 0m0.376s 00:05:25.400 19:06:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.400 19:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.400 ************************************ 00:05:25.400 END TEST event_scheduler 00:05:25.400 ************************************ 00:05:25.658 19:06:02 -- event/event.sh@51 -- # modprobe -n nbd 00:05:25.658 19:06:02 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:25.658 19:06:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:25.658 19:06:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:25.658 19:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.658 ************************************ 00:05:25.658 START TEST app_repeat 00:05:25.658 ************************************ 00:05:25.658 19:06:02 -- common/autotest_common.sh@1102 -- # app_repeat_test 00:05:25.658 19:06:02 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.658 19:06:02 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.658 19:06:02 -- event/event.sh@13 -- # local nbd_list 00:05:25.658 19:06:02 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.658 19:06:02 -- event/event.sh@14 -- # local bdev_list 00:05:25.658 19:06:02 -- event/event.sh@15 -- # local repeat_times=4 00:05:25.658 19:06:02 -- event/event.sh@17 -- # modprobe nbd 00:05:25.658 19:06:02 -- event/event.sh@19 -- # repeat_pid=57889 00:05:25.658 19:06:02 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:25.658 19:06:02 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.658 Process app_repeat pid: 57889 00:05:25.658 19:06:02 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57889' 00:05:25.659 19:06:02 -- event/event.sh@23 -- # for i in {0..2} 00:05:25.659 spdk_app_start Round 0 00:05:25.659 19:06:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:25.659 19:06:02 -- event/event.sh@25 -- # waitforlisten 57889 /var/tmp/spdk-nbd.sock 00:05:25.659 19:06:02 -- common/autotest_common.sh@817 -- # '[' -z 57889 ']' 00:05:25.659 19:06:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.659 19:06:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.659 19:06:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.659 19:06:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.659 19:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.659 [2024-02-14 19:06:02.899150] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:25.659 [2024-02-14 19:06:02.899816] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57889 ] 00:05:25.659 [2024-02-14 19:06:03.057328] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.918 [2024-02-14 19:06:03.229037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.918 [2024-02-14 19:06:03.229049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.484 19:06:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.484 19:06:03 -- common/autotest_common.sh@850 -- # return 0 00:05:26.484 19:06:03 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.742 Malloc0 00:05:26.742 19:06:04 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.001 Malloc1 00:05:27.001 19:06:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.001 19:06:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.001 19:06:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.001 19:06:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.001 19:06:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.001 19:06:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.001 19:06:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.002 19:06:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.002 19:06:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.002 19:06:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.002 19:06:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.002 19:06:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.002 19:06:04 -- bdev/nbd_common.sh@12 -- # local i 00:05:27.002 19:06:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.002 19:06:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.002 19:06:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.260 /dev/nbd0 00:05:27.260 19:06:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.260 19:06:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.260 19:06:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:27.260 19:06:04 -- common/autotest_common.sh@855 -- # local i 00:05:27.260 19:06:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:27.260 19:06:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:27.260 19:06:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:27.260 19:06:04 -- common/autotest_common.sh@859 -- # break 00:05:27.260 19:06:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:27.260 19:06:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:27.260 19:06:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.260 1+0 records in 00:05:27.260 1+0 records out 00:05:27.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032445 s, 12.6 MB/s 00:05:27.260 19:06:04 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.260 19:06:04 -- common/autotest_common.sh@872 -- # size=4096 00:05:27.260 19:06:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.260 19:06:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:27.260 19:06:04 -- common/autotest_common.sh@875 -- # return 0 00:05:27.260 19:06:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.260 19:06:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.260 19:06:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.519 /dev/nbd1 00:05:27.777 19:06:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.777 19:06:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.777 19:06:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:27.777 19:06:04 -- common/autotest_common.sh@855 -- # local i 00:05:27.777 19:06:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:27.777 19:06:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:27.777 19:06:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:27.777 19:06:04 -- common/autotest_common.sh@859 -- # break 00:05:27.777 19:06:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:27.777 19:06:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:27.777 19:06:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.777 1+0 records in 00:05:27.777 1+0 records out 00:05:27.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393474 s, 10.4 MB/s 00:05:27.777 19:06:04 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.777 19:06:04 -- common/autotest_common.sh@872 -- # size=4096 00:05:27.777 19:06:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.777 19:06:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:27.777 19:06:04 -- common/autotest_common.sh@875 -- # return 0 00:05:27.777 19:06:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.777 19:06:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.777 19:06:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.777 19:06:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.777 19:06:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.055 { 00:05:28.055 "nbd_device": "/dev/nbd0", 00:05:28.055 "bdev_name": "Malloc0" 00:05:28.055 }, 00:05:28.055 { 00:05:28.055 "nbd_device": "/dev/nbd1", 00:05:28.055 "bdev_name": "Malloc1" 00:05:28.055 } 00:05:28.055 ]' 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.055 { 00:05:28.055 "nbd_device": "/dev/nbd0", 00:05:28.055 "bdev_name": "Malloc0" 00:05:28.055 }, 00:05:28.055 { 00:05:28.055 "nbd_device": "/dev/nbd1", 00:05:28.055 "bdev_name": "Malloc1" 00:05:28.055 } 00:05:28.055 ]' 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.055 /dev/nbd1' 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.055 /dev/nbd1' 00:05:28.055 19:06:05 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.055 256+0 records in 00:05:28.055 256+0 records out 00:05:28.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00705139 s, 149 MB/s 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.055 256+0 records in 00:05:28.055 256+0 records out 00:05:28.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260982 s, 40.2 MB/s 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.055 256+0 records in 00:05:28.055 256+0 records out 00:05:28.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313126 s, 33.5 MB/s 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@51 -- # local i 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.055 19:06:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.315 19:06:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.315 19:06:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.315 19:06:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.315 19:06:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.315 19:06:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.315 19:06:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.315 19:06:05 -- bdev/nbd_common.sh@41 -- # break 00:05:28.315 19:06:05 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.315 19:06:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.315 19:06:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@41 -- # break 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.574 19:06:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@65 -- # true 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.833 19:06:06 -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.833 19:06:06 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.402 19:06:06 -- event/event.sh@35 -- # sleep 3 00:05:30.339 [2024-02-14 19:06:07.700795] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.597 [2024-02-14 19:06:07.866269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.597 [2024-02-14 19:06:07.866277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.856 [2024-02-14 19:06:08.025008] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.856 [2024-02-14 19:06:08.025131] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
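One app_repeat round, as traced above, exports two freshly created malloc bdevs over NBD, writes 1 MiB of random data through each block device, and compares it back before tearing the devices down again. Condensed from the xtrace (rpc.py talks to the app_repeat instance on /var/tmp/spdk-nbd.sock; the temp-file path follows the log):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create 64 4096          # -> Malloc0
$rpc bdev_malloc_create 64 4096          # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$randfile" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    # Write the pattern through the NBD device, then read it back and compare
    dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$randfile" "$nbd"
done
rm "$randfile"

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1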
00:05:32.759 19:06:09 -- event/event.sh@23 -- # for i in {0..2} 00:05:32.759 spdk_app_start Round 1 00:05:32.759 19:06:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:32.759 19:06:09 -- event/event.sh@25 -- # waitforlisten 57889 /var/tmp/spdk-nbd.sock 00:05:32.759 19:06:09 -- common/autotest_common.sh@817 -- # '[' -z 57889 ']' 00:05:32.759 19:06:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.759 19:06:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.759 19:06:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.759 19:06:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.759 19:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:32.759 19:06:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.759 19:06:09 -- common/autotest_common.sh@850 -- # return 0 00:05:32.759 19:06:09 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.017 Malloc0 00:05:33.017 19:06:10 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.275 Malloc1 00:05:33.275 19:06:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@12 -- # local i 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.275 19:06:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.533 /dev/nbd0 00:05:33.533 19:06:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.533 19:06:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.533 19:06:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:33.533 19:06:10 -- common/autotest_common.sh@855 -- # local i 00:05:33.533 19:06:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:33.533 19:06:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:33.533 19:06:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:33.533 19:06:10 -- common/autotest_common.sh@859 -- # break 00:05:33.533 19:06:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:33.534 19:06:10 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:05:33.534 19:06:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.534 1+0 records in 00:05:33.534 1+0 records out 00:05:33.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440868 s, 9.3 MB/s 00:05:33.534 19:06:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.534 19:06:10 -- common/autotest_common.sh@872 -- # size=4096 00:05:33.534 19:06:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.534 19:06:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:33.534 19:06:10 -- common/autotest_common.sh@875 -- # return 0 00:05:33.534 19:06:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.534 19:06:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.534 19:06:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.792 /dev/nbd1 00:05:33.792 19:06:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.792 19:06:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.792 19:06:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:33.792 19:06:11 -- common/autotest_common.sh@855 -- # local i 00:05:33.792 19:06:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:33.792 19:06:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:33.792 19:06:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:33.792 19:06:11 -- common/autotest_common.sh@859 -- # break 00:05:33.792 19:06:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:33.792 19:06:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:33.792 19:06:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.792 1+0 records in 00:05:33.792 1+0 records out 00:05:33.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240981 s, 17.0 MB/s 00:05:33.792 19:06:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.792 19:06:11 -- common/autotest_common.sh@872 -- # size=4096 00:05:33.792 19:06:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.792 19:06:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:33.792 19:06:11 -- common/autotest_common.sh@875 -- # return 0 00:05:33.792 19:06:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.792 19:06:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.792 19:06:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.792 19:06:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.792 19:06:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.050 { 00:05:34.050 "nbd_device": "/dev/nbd0", 00:05:34.050 "bdev_name": "Malloc0" 00:05:34.050 }, 00:05:34.050 { 00:05:34.050 "nbd_device": "/dev/nbd1", 00:05:34.050 "bdev_name": "Malloc1" 00:05:34.050 } 00:05:34.050 ]' 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.050 { 00:05:34.050 "nbd_device": "/dev/nbd0", 00:05:34.050 "bdev_name": "Malloc0" 00:05:34.050 }, 00:05:34.050 { 00:05:34.050 "nbd_device": "/dev/nbd1", 00:05:34.050 "bdev_name": "Malloc1" 00:05:34.050 } 
00:05:34.050 ]' 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.050 /dev/nbd1' 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.050 /dev/nbd1' 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.050 256+0 records in 00:05:34.050 256+0 records out 00:05:34.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00995839 s, 105 MB/s 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.050 256+0 records in 00:05:34.050 256+0 records out 00:05:34.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299519 s, 35.0 MB/s 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.050 19:06:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.308 256+0 records in 00:05:34.308 256+0 records out 00:05:34.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029818 s, 35.2 MB/s 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:05:34.308 19:06:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@51 -- # local i 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@41 -- # break 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.308 19:06:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@41 -- # break 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.566 19:06:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.824 19:06:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.824 19:06:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.824 19:06:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.083 19:06:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.083 19:06:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.083 19:06:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.083 19:06:12 -- bdev/nbd_common.sh@65 -- # true 00:05:35.083 19:06:12 -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.083 19:06:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.083 19:06:12 -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.083 19:06:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.083 19:06:12 -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.083 19:06:12 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.342 19:06:12 -- event/event.sh@35 -- # sleep 3 00:05:36.725 [2024-02-14 19:06:13.738476] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.725 [2024-02-14 19:06:13.908753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.725 [2024-02-14 19:06:13.908759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.725 [2024-02-14 19:06:14.066054] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:36.725 [2024-02-14 19:06:14.066178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.629 spdk_app_start Round 2 00:05:38.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.629 19:06:15 -- event/event.sh@23 -- # for i in {0..2} 00:05:38.629 19:06:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:38.629 19:06:15 -- event/event.sh@25 -- # waitforlisten 57889 /var/tmp/spdk-nbd.sock 00:05:38.629 19:06:15 -- common/autotest_common.sh@817 -- # '[' -z 57889 ']' 00:05:38.629 19:06:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.629 19:06:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:38.629 19:06:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.629 19:06:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:38.629 19:06:15 -- common/autotest_common.sh@10 -- # set +x 00:05:38.629 19:06:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:38.629 19:06:15 -- common/autotest_common.sh@850 -- # return 0 00:05:38.629 19:06:15 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.887 Malloc0 00:05:38.887 19:06:16 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.146 Malloc1 00:05:39.146 19:06:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@12 -- # local i 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.146 19:06:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.405 /dev/nbd0 00:05:39.405 19:06:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.405 19:06:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.405 19:06:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:39.405 19:06:16 -- common/autotest_common.sh@855 -- # local i 00:05:39.405 19:06:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:39.405 19:06:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:39.405 19:06:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:39.405 19:06:16 -- common/autotest_common.sh@859 
-- # break 00:05:39.405 19:06:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:39.405 19:06:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:39.405 19:06:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.405 1+0 records in 00:05:39.405 1+0 records out 00:05:39.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316164 s, 13.0 MB/s 00:05:39.405 19:06:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.405 19:06:16 -- common/autotest_common.sh@872 -- # size=4096 00:05:39.405 19:06:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.405 19:06:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:39.405 19:06:16 -- common/autotest_common.sh@875 -- # return 0 00:05:39.405 19:06:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.405 19:06:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.405 19:06:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.665 /dev/nbd1 00:05:39.665 19:06:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.665 19:06:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.665 19:06:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:39.665 19:06:17 -- common/autotest_common.sh@855 -- # local i 00:05:39.665 19:06:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:39.665 19:06:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:39.665 19:06:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:39.665 19:06:17 -- common/autotest_common.sh@859 -- # break 00:05:39.665 19:06:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:39.665 19:06:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:39.665 19:06:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.665 1+0 records in 00:05:39.665 1+0 records out 00:05:39.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684815 s, 6.0 MB/s 00:05:39.665 19:06:17 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.926 19:06:17 -- common/autotest_common.sh@872 -- # size=4096 00:05:39.926 19:06:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.926 19:06:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:39.926 19:06:17 -- common/autotest_common.sh@875 -- # return 0 00:05:39.926 19:06:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.926 19:06:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.926 19:06:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.926 19:06:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.926 19:06:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.184 19:06:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.184 { 00:05:40.184 "nbd_device": "/dev/nbd0", 00:05:40.184 "bdev_name": "Malloc0" 00:05:40.184 }, 00:05:40.184 { 00:05:40.184 "nbd_device": "/dev/nbd1", 00:05:40.184 "bdev_name": "Malloc1" 00:05:40.184 } 00:05:40.184 ]' 00:05:40.184 19:06:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.184 { 00:05:40.184 "nbd_device": "/dev/nbd0", 00:05:40.184 
"bdev_name": "Malloc0" 00:05:40.184 }, 00:05:40.184 { 00:05:40.184 "nbd_device": "/dev/nbd1", 00:05:40.184 "bdev_name": "Malloc1" 00:05:40.184 } 00:05:40.184 ]' 00:05:40.184 19:06:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.184 19:06:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.184 /dev/nbd1' 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.185 /dev/nbd1' 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.185 256+0 records in 00:05:40.185 256+0 records out 00:05:40.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00854657 s, 123 MB/s 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.185 256+0 records in 00:05:40.185 256+0 records out 00:05:40.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260009 s, 40.3 MB/s 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.185 256+0 records in 00:05:40.185 256+0 records out 00:05:40.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270866 s, 38.7 MB/s 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.185 19:06:17 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@51 -- # local i 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.185 19:06:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.444 19:06:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.444 19:06:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.444 19:06:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.444 19:06:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.444 19:06:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.444 19:06:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.444 19:06:17 -- bdev/nbd_common.sh@41 -- # break 00:05:40.444 19:06:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.444 19:06:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.444 19:06:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.703 19:06:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.703 19:06:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.703 19:06:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.703 19:06:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.703 19:06:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.703 19:06:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.703 19:06:18 -- bdev/nbd_common.sh@41 -- # break 00:05:40.703 19:06:18 -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.703 19:06:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.703 19:06:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.703 19:06:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.961 19:06:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.961 19:06:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.961 19:06:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.961 19:06:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.962 19:06:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.962 19:06:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.962 19:06:18 -- bdev/nbd_common.sh@65 -- # true 00:05:40.962 19:06:18 -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.962 19:06:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.962 19:06:18 -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.962 19:06:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.962 19:06:18 -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.962 19:06:18 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.529 19:06:18 -- event/event.sh@35 -- # sleep 3 00:05:42.463 [2024-02-14 19:06:19.817387] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.721 [2024-02-14 19:06:19.987234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.721 [2024-02-14 19:06:19.987239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.980 [2024-02-14 19:06:20.152595] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:05:42.980 [2024-02-14 19:06:20.152670] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.377 19:06:21 -- event/event.sh@38 -- # waitforlisten 57889 /var/tmp/spdk-nbd.sock 00:05:44.377 19:06:21 -- common/autotest_common.sh@817 -- # '[' -z 57889 ']' 00:05:44.377 19:06:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.377 19:06:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.377 19:06:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.377 19:06:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.377 19:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:44.636 19:06:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.636 19:06:22 -- common/autotest_common.sh@850 -- # return 0 00:05:44.636 19:06:22 -- event/event.sh@39 -- # killprocess 57889 00:05:44.636 19:06:22 -- common/autotest_common.sh@924 -- # '[' -z 57889 ']' 00:05:44.636 19:06:22 -- common/autotest_common.sh@928 -- # kill -0 57889 00:05:44.636 19:06:22 -- common/autotest_common.sh@929 -- # uname 00:05:44.636 19:06:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:44.636 19:06:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57889 00:05:44.895 killing process with pid 57889 00:05:44.895 19:06:22 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:44.895 19:06:22 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:44.895 19:06:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57889' 00:05:44.895 19:06:22 -- common/autotest_common.sh@943 -- # kill 57889 00:05:44.895 19:06:22 -- common/autotest_common.sh@948 -- # wait 57889 00:05:45.832 spdk_app_start is called in Round 0. 00:05:45.832 Shutdown signal received, stop current app iteration 00:05:45.832 Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 reinitialization... 00:05:45.832 spdk_app_start is called in Round 1. 00:05:45.832 Shutdown signal received, stop current app iteration 00:05:45.833 Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 reinitialization... 00:05:45.833 spdk_app_start is called in Round 2. 00:05:45.833 Shutdown signal received, stop current app iteration 00:05:45.833 Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 reinitialization... 00:05:45.833 spdk_app_start is called in Round 3. 
00:05:45.833 Shutdown signal received, stop current app iteration 00:05:45.833 19:06:23 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.833 19:06:23 -- event/event.sh@42 -- # return 0 00:05:45.833 00:05:45.833 real 0m20.189s 00:05:45.833 user 0m43.640s 00:05:45.833 sys 0m2.596s 00:05:45.833 19:06:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.833 19:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:45.833 ************************************ 00:05:45.833 END TEST app_repeat 00:05:45.833 ************************************ 00:05:45.833 19:06:23 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.833 19:06:23 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.833 19:06:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:45.833 19:06:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:45.833 19:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:45.833 ************************************ 00:05:45.833 START TEST cpu_locks 00:05:45.833 ************************************ 00:05:45.833 19:06:23 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.833 * Looking for test storage... 00:05:45.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:45.833 19:06:23 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:45.833 19:06:23 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:45.833 19:06:23 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:45.833 19:06:23 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:45.833 19:06:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:45.833 19:06:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:45.833 19:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:45.833 ************************************ 00:05:45.833 START TEST default_locks 00:05:45.833 ************************************ 00:05:45.833 19:06:23 -- common/autotest_common.sh@1102 -- # default_locks 00:05:45.833 19:06:23 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58333 00:05:45.833 19:06:23 -- event/cpu_locks.sh@47 -- # waitforlisten 58333 00:05:45.833 19:06:23 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.833 19:06:23 -- common/autotest_common.sh@817 -- # '[' -z 58333 ']' 00:05:45.833 19:06:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.833 19:06:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.833 19:06:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.833 19:06:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.833 19:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:46.091 [2024-02-14 19:06:23.303672] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:46.091 [2024-02-14 19:06:23.303827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58333 ] 00:05:46.091 [2024-02-14 19:06:23.472810] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.350 [2024-02-14 19:06:23.652856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.350 [2024-02-14 19:06:23.653114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.727 19:06:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.727 19:06:24 -- common/autotest_common.sh@850 -- # return 0 00:05:47.727 19:06:24 -- event/cpu_locks.sh@49 -- # locks_exist 58333 00:05:47.727 19:06:24 -- event/cpu_locks.sh@22 -- # lslocks -p 58333 00:05:47.727 19:06:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.985 19:06:25 -- event/cpu_locks.sh@50 -- # killprocess 58333 00:05:47.986 19:06:25 -- common/autotest_common.sh@924 -- # '[' -z 58333 ']' 00:05:47.986 19:06:25 -- common/autotest_common.sh@928 -- # kill -0 58333 00:05:47.986 19:06:25 -- common/autotest_common.sh@929 -- # uname 00:05:47.986 19:06:25 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:47.986 19:06:25 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58333 00:05:48.244 19:06:25 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:48.244 killing process with pid 58333 00:05:48.244 19:06:25 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:48.244 19:06:25 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58333' 00:05:48.244 19:06:25 -- common/autotest_common.sh@943 -- # kill 58333 00:05:48.244 19:06:25 -- common/autotest_common.sh@948 -- # wait 58333 00:05:50.147 19:06:27 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58333 00:05:50.147 19:06:27 -- common/autotest_common.sh@638 -- # local es=0 00:05:50.147 19:06:27 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 58333 00:05:50.147 19:06:27 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:50.147 19:06:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:50.147 19:06:27 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:50.147 19:06:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:50.147 19:06:27 -- common/autotest_common.sh@641 -- # waitforlisten 58333 00:05:50.147 19:06:27 -- common/autotest_common.sh@817 -- # '[' -z 58333 ']' 00:05:50.147 19:06:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.147 19:06:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.147 ERROR: process (pid: 58333) is no longer running 00:05:50.147 19:06:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:50.147 19:06:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.147 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.147 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (58333) - No such process 00:05:50.147 19:06:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.147 19:06:27 -- common/autotest_common.sh@850 -- # return 1 00:05:50.147 19:06:27 -- common/autotest_common.sh@641 -- # es=1 00:05:50.147 19:06:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:50.147 19:06:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:50.147 19:06:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:50.147 19:06:27 -- event/cpu_locks.sh@54 -- # no_locks 00:05:50.147 19:06:27 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.147 19:06:27 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.147 19:06:27 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.147 00:05:50.147 real 0m4.073s 00:05:50.147 user 0m4.403s 00:05:50.147 sys 0m0.594s 00:05:50.147 19:06:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.147 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.147 ************************************ 00:05:50.147 END TEST default_locks 00:05:50.147 ************************************ 00:05:50.147 19:06:27 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:50.147 19:06:27 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:50.147 19:06:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:50.147 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.147 ************************************ 00:05:50.147 START TEST default_locks_via_rpc 00:05:50.147 ************************************ 00:05:50.147 19:06:27 -- common/autotest_common.sh@1102 -- # default_locks_via_rpc 00:05:50.147 19:06:27 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58410 00:05:50.147 19:06:27 -- event/cpu_locks.sh@63 -- # waitforlisten 58410 00:05:50.147 19:06:27 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.147 19:06:27 -- common/autotest_common.sh@817 -- # '[' -z 58410 ']' 00:05:50.147 19:06:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.147 19:06:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.147 19:06:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.147 19:06:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.147 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.147 [2024-02-14 19:06:27.427256] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:50.147 [2024-02-14 19:06:27.427435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58410 ] 00:05:50.406 [2024-02-14 19:06:27.602287] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.406 [2024-02-14 19:06:27.756701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.406 [2024-02-14 19:06:27.756954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.783 19:06:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:51.783 19:06:29 -- common/autotest_common.sh@850 -- # return 0 00:05:51.783 19:06:29 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:51.783 19:06:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.783 19:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:51.783 19:06:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.783 19:06:29 -- event/cpu_locks.sh@67 -- # no_locks 00:05:51.783 19:06:29 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.783 19:06:29 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.783 19:06:29 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.783 19:06:29 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.783 19:06:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.783 19:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:51.783 19:06:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.783 19:06:29 -- event/cpu_locks.sh@71 -- # locks_exist 58410 00:05:51.783 19:06:29 -- event/cpu_locks.sh@22 -- # lslocks -p 58410 00:05:51.783 19:06:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.350 19:06:29 -- event/cpu_locks.sh@73 -- # killprocess 58410 00:05:52.350 19:06:29 -- common/autotest_common.sh@924 -- # '[' -z 58410 ']' 00:05:52.350 19:06:29 -- common/autotest_common.sh@928 -- # kill -0 58410 00:05:52.350 19:06:29 -- common/autotest_common.sh@929 -- # uname 00:05:52.350 19:06:29 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:52.350 19:06:29 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58410 00:05:52.350 killing process with pid 58410 00:05:52.350 19:06:29 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:52.350 19:06:29 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:52.350 19:06:29 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58410' 00:05:52.350 19:06:29 -- common/autotest_common.sh@943 -- # kill 58410 00:05:52.350 19:06:29 -- common/autotest_common.sh@948 -- # wait 58410 00:05:54.253 00:05:54.253 real 0m4.037s 00:05:54.253 user 0m4.364s 00:05:54.253 sys 0m0.586s 00:05:54.253 19:06:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.253 19:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.253 ************************************ 00:05:54.253 END TEST default_locks_via_rpc 00:05:54.253 ************************************ 00:05:54.253 19:06:31 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:54.253 19:06:31 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:54.253 19:06:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:54.253 19:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.253 
************************************ 00:05:54.253 START TEST non_locking_app_on_locked_coremask 00:05:54.253 ************************************ 00:05:54.253 19:06:31 -- common/autotest_common.sh@1102 -- # non_locking_app_on_locked_coremask 00:05:54.253 19:06:31 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58488 00:05:54.253 19:06:31 -- event/cpu_locks.sh@81 -- # waitforlisten 58488 /var/tmp/spdk.sock 00:05:54.253 19:06:31 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.253 19:06:31 -- common/autotest_common.sh@817 -- # '[' -z 58488 ']' 00:05:54.253 19:06:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.253 19:06:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.253 19:06:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.253 19:06:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.253 19:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.253 [2024-02-14 19:06:31.504038] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:54.253 [2024-02-14 19:06:31.504173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58488 ] 00:05:54.253 [2024-02-14 19:06:31.664984] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.512 [2024-02-14 19:06:31.821788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.512 [2024-02-14 19:06:31.822037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.887 19:06:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.887 19:06:33 -- common/autotest_common.sh@850 -- # return 0 00:05:55.887 19:06:33 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58517 00:05:55.887 19:06:33 -- event/cpu_locks.sh@85 -- # waitforlisten 58517 /var/tmp/spdk2.sock 00:05:55.887 19:06:33 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:55.887 19:06:33 -- common/autotest_common.sh@817 -- # '[' -z 58517 ']' 00:05:55.887 19:06:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.887 19:06:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.887 19:06:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.887 19:06:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.887 19:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:55.887 [2024-02-14 19:06:33.252423] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:55.887 [2024-02-14 19:06:33.252948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58517 ] 00:05:56.145 [2024-02-14 19:06:33.421802] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.145 [2024-02-14 19:06:33.421893] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.404 [2024-02-14 19:06:33.781048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.404 [2024-02-14 19:06:33.781285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.308 19:06:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.308 19:06:35 -- common/autotest_common.sh@850 -- # return 0 00:05:58.308 19:06:35 -- event/cpu_locks.sh@87 -- # locks_exist 58488 00:05:58.308 19:06:35 -- event/cpu_locks.sh@22 -- # lslocks -p 58488 00:05:58.308 19:06:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.875 19:06:36 -- event/cpu_locks.sh@89 -- # killprocess 58488 00:05:58.875 19:06:36 -- common/autotest_common.sh@924 -- # '[' -z 58488 ']' 00:05:58.875 19:06:36 -- common/autotest_common.sh@928 -- # kill -0 58488 00:05:58.875 19:06:36 -- common/autotest_common.sh@929 -- # uname 00:05:58.875 19:06:36 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:58.875 19:06:36 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58488 00:05:58.875 killing process with pid 58488 00:05:58.875 19:06:36 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:58.875 19:06:36 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:58.875 19:06:36 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58488' 00:05:58.875 19:06:36 -- common/autotest_common.sh@943 -- # kill 58488 00:05:58.875 19:06:36 -- common/autotest_common.sh@948 -- # wait 58488 00:06:03.065 19:06:40 -- event/cpu_locks.sh@90 -- # killprocess 58517 00:06:03.065 19:06:40 -- common/autotest_common.sh@924 -- # '[' -z 58517 ']' 00:06:03.065 19:06:40 -- common/autotest_common.sh@928 -- # kill -0 58517 00:06:03.065 19:06:40 -- common/autotest_common.sh@929 -- # uname 00:06:03.065 19:06:40 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:03.065 19:06:40 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58517 00:06:03.065 killing process with pid 58517 00:06:03.065 19:06:40 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:03.065 19:06:40 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:03.065 19:06:40 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58517' 00:06:03.065 19:06:40 -- common/autotest_common.sh@943 -- # kill 58517 00:06:03.065 19:06:40 -- common/autotest_common.sh@948 -- # wait 58517 00:06:04.969 ************************************ 00:06:04.969 END TEST non_locking_app_on_locked_coremask 00:06:04.969 ************************************ 00:06:04.969 00:06:04.969 real 0m10.459s 00:06:04.969 user 0m11.442s 00:06:04.969 sys 0m1.143s 00:06:04.969 19:06:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.969 19:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:04.969 19:06:41 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:04.969 19:06:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:04.969 19:06:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:04.969 19:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:04.969 ************************************ 00:06:04.969 START TEST locking_app_on_unlocked_coremask 00:06:04.969 ************************************ 00:06:04.969 19:06:41 -- common/autotest_common.sh@1102 -- # locking_app_on_unlocked_coremask 00:06:04.969 19:06:41 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=58651 00:06:04.969 19:06:41 -- event/cpu_locks.sh@99 -- # waitforlisten 58651 /var/tmp/spdk.sock 00:06:04.969 19:06:41 -- common/autotest_common.sh@817 -- # '[' -z 58651 ']' 00:06:04.969 19:06:41 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:04.969 19:06:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.969 19:06:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:04.969 19:06:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.969 19:06:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:04.969 19:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:04.969 [2024-02-14 19:06:42.046928] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:04.969 [2024-02-14 19:06:42.047835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58651 ] 00:06:04.969 [2024-02-14 19:06:42.224541] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.969 [2024-02-14 19:06:42.224605] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.969 [2024-02-14 19:06:42.386234] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.969 [2024-02-14 19:06:42.386477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.347 19:06:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:06.347 19:06:43 -- common/autotest_common.sh@850 -- # return 0 00:06:06.347 19:06:43 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58674 00:06:06.347 19:06:43 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.347 19:06:43 -- event/cpu_locks.sh@103 -- # waitforlisten 58674 /var/tmp/spdk2.sock 00:06:06.347 19:06:43 -- common/autotest_common.sh@817 -- # '[' -z 58674 ']' 00:06:06.347 19:06:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.347 19:06:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:06.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.347 19:06:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.347 19:06:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:06.347 19:06:43 -- common/autotest_common.sh@10 -- # set +x 00:06:06.347 [2024-02-14 19:06:43.710937] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:06.347 [2024-02-14 19:06:43.711114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58674 ] 00:06:06.606 [2024-02-14 19:06:43.888287] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.865 [2024-02-14 19:06:44.216683] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.865 [2024-02-14 19:06:44.216917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.768 19:06:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.768 19:06:46 -- common/autotest_common.sh@850 -- # return 0 00:06:08.768 19:06:46 -- event/cpu_locks.sh@105 -- # locks_exist 58674 00:06:08.768 19:06:46 -- event/cpu_locks.sh@22 -- # lslocks -p 58674 00:06:08.768 19:06:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.346 19:06:46 -- event/cpu_locks.sh@107 -- # killprocess 58651 00:06:09.346 19:06:46 -- common/autotest_common.sh@924 -- # '[' -z 58651 ']' 00:06:09.346 19:06:46 -- common/autotest_common.sh@928 -- # kill -0 58651 00:06:09.346 19:06:46 -- common/autotest_common.sh@929 -- # uname 00:06:09.346 19:06:46 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:09.346 19:06:46 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58651 00:06:09.346 19:06:46 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:09.346 killing process with pid 58651 00:06:09.346 19:06:46 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:09.346 19:06:46 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58651' 00:06:09.346 19:06:46 -- common/autotest_common.sh@943 -- # kill 58651 00:06:09.346 19:06:46 -- common/autotest_common.sh@948 -- # wait 58651 00:06:13.541 19:06:50 -- event/cpu_locks.sh@108 -- # killprocess 58674 00:06:13.541 19:06:50 -- common/autotest_common.sh@924 -- # '[' -z 58674 ']' 00:06:13.541 19:06:50 -- common/autotest_common.sh@928 -- # kill -0 58674 00:06:13.541 19:06:50 -- common/autotest_common.sh@929 -- # uname 00:06:13.541 19:06:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:13.541 19:06:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58674 00:06:13.541 killing process with pid 58674 00:06:13.541 19:06:50 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:13.541 19:06:50 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:13.541 19:06:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58674' 00:06:13.541 19:06:50 -- common/autotest_common.sh@943 -- # kill 58674 00:06:13.541 19:06:50 -- common/autotest_common.sh@948 -- # wait 58674 00:06:14.918 ************************************ 00:06:14.918 END TEST locking_app_on_unlocked_coremask 00:06:14.918 ************************************ 00:06:14.918 00:06:14.918 real 0m10.335s 00:06:14.918 user 0m11.297s 00:06:14.918 sys 0m1.109s 00:06:14.918 19:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.918 19:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:14.918 19:06:52 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:14.918 19:06:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:14.918 19:06:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:14.918 19:06:52 -- common/autotest_common.sh@10 -- # set 
+x 00:06:14.918 ************************************ 00:06:14.918 START TEST locking_app_on_locked_coremask 00:06:14.918 ************************************ 00:06:14.918 19:06:52 -- common/autotest_common.sh@1102 -- # locking_app_on_locked_coremask 00:06:14.918 19:06:52 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58809 00:06:14.918 19:06:52 -- event/cpu_locks.sh@116 -- # waitforlisten 58809 /var/tmp/spdk.sock 00:06:14.918 19:06:52 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.918 19:06:52 -- common/autotest_common.sh@817 -- # '[' -z 58809 ']' 00:06:14.918 19:06:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.918 19:06:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:14.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.918 19:06:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.918 19:06:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:14.918 19:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.177 [2024-02-14 19:06:52.436661] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:15.177 [2024-02-14 19:06:52.436835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58809 ] 00:06:15.436 [2024-02-14 19:06:52.608135] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.436 [2024-02-14 19:06:52.770799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.436 [2024-02-14 19:06:52.771009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.814 19:06:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:16.814 19:06:54 -- common/autotest_common.sh@850 -- # return 0 00:06:16.814 19:06:54 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58838 00:06:16.814 19:06:54 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58838 /var/tmp/spdk2.sock 00:06:16.814 19:06:54 -- common/autotest_common.sh@638 -- # local es=0 00:06:16.814 19:06:54 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.814 19:06:54 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 58838 /var/tmp/spdk2.sock 00:06:16.814 19:06:54 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:16.814 19:06:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:16.814 19:06:54 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:16.814 19:06:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:16.814 19:06:54 -- common/autotest_common.sh@641 -- # waitforlisten 58838 /var/tmp/spdk2.sock 00:06:16.814 19:06:54 -- common/autotest_common.sh@817 -- # '[' -z 58838 ']' 00:06:16.814 19:06:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.814 19:06:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:16.814 19:06:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:16.814 19:06:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:16.814 19:06:54 -- common/autotest_common.sh@10 -- # set +x 00:06:16.814 [2024-02-14 19:06:54.179049] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:16.814 [2024-02-14 19:06:54.179240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58838 ] 00:06:17.073 [2024-02-14 19:06:54.354038] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58809 has claimed it. 00:06:17.073 [2024-02-14 19:06:54.354128] app.c: 789:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.640 ERROR: process (pid: 58838) is no longer running 00:06:17.640 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (58838) - No such process 00:06:17.640 19:06:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:17.640 19:06:54 -- common/autotest_common.sh@850 -- # return 1 00:06:17.640 19:06:54 -- common/autotest_common.sh@641 -- # es=1 00:06:17.640 19:06:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:17.640 19:06:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:17.640 19:06:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:17.640 19:06:54 -- event/cpu_locks.sh@122 -- # locks_exist 58809 00:06:17.640 19:06:54 -- event/cpu_locks.sh@22 -- # lslocks -p 58809 00:06:17.640 19:06:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.899 19:06:55 -- event/cpu_locks.sh@124 -- # killprocess 58809 00:06:17.899 19:06:55 -- common/autotest_common.sh@924 -- # '[' -z 58809 ']' 00:06:17.899 19:06:55 -- common/autotest_common.sh@928 -- # kill -0 58809 00:06:17.899 19:06:55 -- common/autotest_common.sh@929 -- # uname 00:06:17.899 19:06:55 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:17.899 19:06:55 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58809 00:06:17.899 19:06:55 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:17.899 killing process with pid 58809 00:06:17.899 19:06:55 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:17.899 19:06:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58809' 00:06:17.899 19:06:55 -- common/autotest_common.sh@943 -- # kill 58809 00:06:17.899 19:06:55 -- common/autotest_common.sh@948 -- # wait 58809 00:06:19.804 00:06:19.804 real 0m4.715s 00:06:19.804 user 0m5.276s 00:06:19.804 sys 0m0.711s 00:06:19.804 ************************************ 00:06:19.804 END TEST locking_app_on_locked_coremask 00:06:19.804 ************************************ 00:06:19.804 19:06:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.804 19:06:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.804 19:06:57 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:19.804 19:06:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:19.804 19:06:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:19.804 19:06:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.804 ************************************ 00:06:19.804 START TEST locking_overlapped_coremask 00:06:19.804 ************************************ 00:06:19.804 19:06:57 -- common/autotest_common.sh@1102 -- # locking_overlapped_coremask 00:06:19.804 19:06:57 -- 
event/cpu_locks.sh@132 -- # spdk_tgt_pid=58902 00:06:19.804 19:06:57 -- event/cpu_locks.sh@133 -- # waitforlisten 58902 /var/tmp/spdk.sock 00:06:19.804 19:06:57 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:19.804 19:06:57 -- common/autotest_common.sh@817 -- # '[' -z 58902 ']' 00:06:19.804 19:06:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.804 19:06:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:19.804 19:06:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.804 19:06:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:19.804 19:06:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.804 [2024-02-14 19:06:57.197442] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:19.804 [2024-02-14 19:06:57.197673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58902 ] 00:06:20.063 [2024-02-14 19:06:57.368432] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.322 [2024-02-14 19:06:57.533661] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.322 [2024-02-14 19:06:57.534097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.322 [2024-02-14 19:06:57.534229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.322 [2024-02-14 19:06:57.534245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.700 19:06:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.701 19:06:58 -- common/autotest_common.sh@850 -- # return 0 00:06:21.701 19:06:58 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58922 00:06:21.701 19:06:58 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58922 /var/tmp/spdk2.sock 00:06:21.701 19:06:58 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:21.701 19:06:58 -- common/autotest_common.sh@638 -- # local es=0 00:06:21.701 19:06:58 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 58922 /var/tmp/spdk2.sock 00:06:21.701 19:06:58 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:21.701 19:06:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:21.701 19:06:58 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:21.701 19:06:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:21.701 19:06:58 -- common/autotest_common.sh@641 -- # waitforlisten 58922 /var/tmp/spdk2.sock 00:06:21.701 19:06:58 -- common/autotest_common.sh@817 -- # '[' -z 58922 ']' 00:06:21.701 19:06:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.701 19:06:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:21.701 19:06:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:21.701 19:06:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:21.701 19:06:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.701 [2024-02-14 19:06:59.006981] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:21.701 [2024-02-14 19:06:59.007149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58922 ] 00:06:21.960 [2024-02-14 19:06:59.185211] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58902 has claimed it. 00:06:21.960 [2024-02-14 19:06:59.188668] app.c: 789:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.528 ERROR: process (pid: 58922) is no longer running 00:06:22.528 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (58922) - No such process 00:06:22.528 19:06:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:22.528 19:06:59 -- common/autotest_common.sh@850 -- # return 1 00:06:22.528 19:06:59 -- common/autotest_common.sh@641 -- # es=1 00:06:22.528 19:06:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:22.528 19:06:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:22.528 19:06:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:22.528 19:06:59 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:22.528 19:06:59 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.528 19:06:59 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.528 19:06:59 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.528 19:06:59 -- event/cpu_locks.sh@141 -- # killprocess 58902 00:06:22.528 19:06:59 -- common/autotest_common.sh@924 -- # '[' -z 58902 ']' 00:06:22.528 19:06:59 -- common/autotest_common.sh@928 -- # kill -0 58902 00:06:22.528 19:06:59 -- common/autotest_common.sh@929 -- # uname 00:06:22.528 19:06:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:22.528 19:06:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58902 00:06:22.528 19:06:59 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:22.528 killing process with pid 58902 00:06:22.528 19:06:59 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:22.528 19:06:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58902' 00:06:22.528 19:06:59 -- common/autotest_common.sh@943 -- # kill 58902 00:06:22.528 19:06:59 -- common/autotest_common.sh@948 -- # wait 58902 00:06:24.448 00:06:24.448 real 0m4.586s 00:06:24.448 user 0m12.639s 00:06:24.448 sys 0m0.588s 00:06:24.448 19:07:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.448 19:07:01 -- common/autotest_common.sh@10 -- # set +x 00:06:24.448 ************************************ 00:06:24.448 END TEST locking_overlapped_coremask 00:06:24.448 ************************************ 00:06:24.448 19:07:01 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:24.448 19:07:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:24.448 19:07:01 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:06:24.448 19:07:01 -- common/autotest_common.sh@10 -- # set +x 00:06:24.448 ************************************ 00:06:24.448 START TEST locking_overlapped_coremask_via_rpc 00:06:24.448 ************************************ 00:06:24.448 19:07:01 -- common/autotest_common.sh@1102 -- # locking_overlapped_coremask_via_rpc 00:06:24.448 19:07:01 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58986 00:06:24.449 19:07:01 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:24.449 19:07:01 -- event/cpu_locks.sh@149 -- # waitforlisten 58986 /var/tmp/spdk.sock 00:06:24.449 19:07:01 -- common/autotest_common.sh@817 -- # '[' -z 58986 ']' 00:06:24.449 19:07:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.449 19:07:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:24.449 19:07:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.449 19:07:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:24.449 19:07:01 -- common/autotest_common.sh@10 -- # set +x 00:06:24.449 [2024-02-14 19:07:01.831272] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:24.449 [2024-02-14 19:07:01.831436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58986 ] 00:06:24.707 [2024-02-14 19:07:01.989374] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:24.707 [2024-02-14 19:07:01.989442] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.966 [2024-02-14 19:07:02.164111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.966 [2024-02-14 19:07:02.164542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.966 [2024-02-14 19:07:02.165033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.966 [2024-02-14 19:07:02.165060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.341 19:07:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.341 19:07:03 -- common/autotest_common.sh@850 -- # return 0 00:06:26.341 19:07:03 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59017 00:06:26.341 19:07:03 -- event/cpu_locks.sh@153 -- # waitforlisten 59017 /var/tmp/spdk2.sock 00:06:26.341 19:07:03 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:26.341 19:07:03 -- common/autotest_common.sh@817 -- # '[' -z 59017 ']' 00:06:26.341 19:07:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.341 19:07:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.341 19:07:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:26.341 19:07:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.341 19:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:26.341 [2024-02-14 19:07:03.559006] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:26.341 [2024-02-14 19:07:03.559138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59017 ] 00:06:26.341 [2024-02-14 19:07:03.729092] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:26.341 [2024-02-14 19:07:03.731542] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.908 [2024-02-14 19:07:04.093562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.908 [2024-02-14 19:07:04.096851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.908 [2024-02-14 19:07:04.096982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:26.908 [2024-02-14 19:07:04.097056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.815 19:07:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:28.815 19:07:05 -- common/autotest_common.sh@850 -- # return 0 00:06:28.815 19:07:05 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.815 19:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.815 19:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.815 19:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.815 19:07:05 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.815 19:07:05 -- common/autotest_common.sh@638 -- # local es=0 00:06:28.815 19:07:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.815 19:07:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:28.815 19:07:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:28.815 19:07:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:28.815 19:07:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:28.815 19:07:06 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.815 19:07:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.815 19:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:28.815 [2024-02-14 19:07:06.007825] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58986 has claimed it. 00:06:28.815 request: 00:06:28.815 { 00:06:28.815 "method": "framework_enable_cpumask_locks", 00:06:28.815 "req_id": 1 00:06:28.815 } 00:06:28.815 Got JSON-RPC error response 00:06:28.815 response: 00:06:28.815 { 00:06:28.815 "code": -32603, 00:06:28.815 "message": "Failed to claim CPU core: 2" 00:06:28.815 } 00:06:28.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.815 19:07:06 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:28.815 19:07:06 -- common/autotest_common.sh@641 -- # es=1 00:06:28.815 19:07:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:28.815 19:07:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:28.815 19:07:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:28.815 19:07:06 -- event/cpu_locks.sh@158 -- # waitforlisten 58986 /var/tmp/spdk.sock 00:06:28.815 19:07:06 -- common/autotest_common.sh@817 -- # '[' -z 58986 ']' 00:06:28.815 19:07:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.815 19:07:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:28.815 19:07:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.815 19:07:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:28.815 19:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.074 19:07:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.074 19:07:06 -- common/autotest_common.sh@850 -- # return 0 00:06:29.074 19:07:06 -- event/cpu_locks.sh@159 -- # waitforlisten 59017 /var/tmp/spdk2.sock 00:06:29.074 19:07:06 -- common/autotest_common.sh@817 -- # '[' -z 59017 ']' 00:06:29.074 19:07:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.074 19:07:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:29.074 19:07:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.074 19:07:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:29.074 19:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.333 19:07:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.333 19:07:06 -- common/autotest_common.sh@850 -- # return 0 00:06:29.333 19:07:06 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:29.333 19:07:06 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.333 19:07:06 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.333 19:07:06 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.333 00:06:29.333 real 0m4.881s 00:06:29.333 user 0m2.035s 00:06:29.333 sys 0m0.259s 00:06:29.333 19:07:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.333 19:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.333 ************************************ 00:06:29.333 END TEST locking_overlapped_coremask_via_rpc 00:06:29.333 ************************************ 00:06:29.333 19:07:06 -- event/cpu_locks.sh@174 -- # cleanup 00:06:29.333 19:07:06 -- event/cpu_locks.sh@15 -- # [[ -z 58986 ]] 00:06:29.333 19:07:06 -- event/cpu_locks.sh@15 -- # killprocess 58986 00:06:29.333 19:07:06 -- common/autotest_common.sh@924 -- # '[' -z 58986 ']' 00:06:29.333 19:07:06 -- common/autotest_common.sh@928 -- # kill -0 58986 00:06:29.333 19:07:06 -- common/autotest_common.sh@929 -- # uname 00:06:29.333 19:07:06 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:29.333 19:07:06 -- common/autotest_common.sh@930 -- # ps 
--no-headers -o comm= 58986 00:06:29.333 killing process with pid 58986 00:06:29.333 19:07:06 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:29.333 19:07:06 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:29.333 19:07:06 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58986' 00:06:29.333 19:07:06 -- common/autotest_common.sh@943 -- # kill 58986 00:06:29.333 19:07:06 -- common/autotest_common.sh@948 -- # wait 58986 00:06:31.864 19:07:08 -- event/cpu_locks.sh@16 -- # [[ -z 59017 ]] 00:06:31.864 19:07:08 -- event/cpu_locks.sh@16 -- # killprocess 59017 00:06:31.864 19:07:08 -- common/autotest_common.sh@924 -- # '[' -z 59017 ']' 00:06:31.864 19:07:08 -- common/autotest_common.sh@928 -- # kill -0 59017 00:06:31.865 19:07:08 -- common/autotest_common.sh@929 -- # uname 00:06:31.865 19:07:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:31.865 19:07:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 59017 00:06:31.865 killing process with pid 59017 00:06:31.865 19:07:08 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:06:31.865 19:07:08 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:06:31.865 19:07:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 59017' 00:06:31.865 19:07:08 -- common/autotest_common.sh@943 -- # kill 59017 00:06:31.865 19:07:08 -- common/autotest_common.sh@948 -- # wait 59017 00:06:33.767 19:07:10 -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.767 19:07:10 -- event/cpu_locks.sh@1 -- # cleanup 00:06:33.767 19:07:10 -- event/cpu_locks.sh@15 -- # [[ -z 58986 ]] 00:06:33.767 19:07:10 -- event/cpu_locks.sh@15 -- # killprocess 58986 00:06:33.767 19:07:10 -- common/autotest_common.sh@924 -- # '[' -z 58986 ']' 00:06:33.767 Process with pid 58986 is not found 00:06:33.767 19:07:10 -- common/autotest_common.sh@928 -- # kill -0 58986 00:06:33.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (58986) - No such process 00:06:33.767 19:07:10 -- common/autotest_common.sh@951 -- # echo 'Process with pid 58986 is not found' 00:06:33.767 19:07:10 -- event/cpu_locks.sh@16 -- # [[ -z 59017 ]] 00:06:33.767 19:07:10 -- event/cpu_locks.sh@16 -- # killprocess 59017 00:06:33.767 19:07:10 -- common/autotest_common.sh@924 -- # '[' -z 59017 ']' 00:06:33.767 19:07:10 -- common/autotest_common.sh@928 -- # kill -0 59017 00:06:33.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (59017) - No such process 00:06:33.768 Process with pid 59017 is not found 00:06:33.768 19:07:10 -- common/autotest_common.sh@951 -- # echo 'Process with pid 59017 is not found' 00:06:33.768 19:07:10 -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.768 00:06:33.768 real 0m47.666s 00:06:33.768 user 1m25.228s 00:06:33.768 sys 0m5.845s 00:06:33.768 ************************************ 00:06:33.768 END TEST cpu_locks 00:06:33.768 ************************************ 00:06:33.768 19:07:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.768 19:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:33.768 ************************************ 00:06:33.768 END TEST event 00:06:33.768 ************************************ 00:06:33.768 00:06:33.768 real 1m17.440s 00:06:33.768 user 2m22.707s 00:06:33.768 sys 0m9.334s 00:06:33.768 19:07:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.768 19:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:33.768 19:07:10 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:33.768 19:07:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:33.768 19:07:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:33.768 19:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:33.768 ************************************ 00:06:33.768 START TEST thread 00:06:33.768 ************************************ 00:06:33.768 19:07:10 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:33.768 * Looking for test storage... 00:06:33.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:33.768 19:07:10 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.768 19:07:10 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:06:33.768 19:07:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:33.768 19:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:33.768 ************************************ 00:06:33.768 START TEST thread_poller_perf 00:06:33.768 ************************************ 00:06:33.768 19:07:10 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.768 [2024-02-14 19:07:10.973870] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:33.768 [2024-02-14 19:07:10.974008] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59194 ] 00:06:33.768 [2024-02-14 19:07:11.134673] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.026 [2024-02-14 19:07:11.317020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.026 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:35.403 ====================================== 00:06:35.403 busy:2212438419 (cyc) 00:06:35.403 total_run_count: 287000 00:06:35.403 tsc_hz: 2200000000 (cyc) 00:06:35.403 ====================================== 00:06:35.403 poller_cost: 7708 (cyc), 3503 (nsec) 00:06:35.403 00:06:35.403 real 0m1.727s 00:06:35.403 user 0m1.531s 00:06:35.403 sys 0m0.085s 00:06:35.403 19:07:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.403 19:07:12 -- common/autotest_common.sh@10 -- # set +x 00:06:35.403 ************************************ 00:06:35.403 END TEST thread_poller_perf 00:06:35.403 ************************************ 00:06:35.403 19:07:12 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.403 19:07:12 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:06:35.403 19:07:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:35.403 19:07:12 -- common/autotest_common.sh@10 -- # set +x 00:06:35.403 ************************************ 00:06:35.403 START TEST thread_poller_perf 00:06:35.403 ************************************ 00:06:35.403 19:07:12 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.403 [2024-02-14 19:07:12.763204] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
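Note: the poller_cost figures printed in the block above follow directly from the other counters in the same block; re-deriving them with the logged values (integer arithmetic only, numbers copied from the output above):

busy_cyc=2212438419         # "busy" cycles from the ==== block above
total_run_count=287000      # poller invocations during the 1-second run
tsc_hz=2200000000           # TSC frequency from the same block
echo $(( busy_cyc / total_run_count ))                         # 7708 cyc per invocation
echo $(( busy_cyc / total_run_count * 1000000000 / tsc_hz ))   # 3503 nsec per invocation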
00:06:35.403 [2024-02-14 19:07:12.763417] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59231 ] 00:06:35.662 [2024-02-14 19:07:12.935624] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.921 [2024-02-14 19:07:13.098802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.921 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:37.298 ====================================== 00:06:37.298 busy:2205242562 (cyc) 00:06:37.298 total_run_count: 4030000 00:06:37.298 tsc_hz: 2200000000 (cyc) 00:06:37.298 ====================================== 00:06:37.298 poller_cost: 547 (cyc), 248 (nsec) 00:06:37.298 00:06:37.298 real 0m1.741s 00:06:37.298 user 0m1.541s 00:06:37.298 sys 0m0.089s 00:06:37.298 19:07:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.298 19:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:37.298 ************************************ 00:06:37.298 END TEST thread_poller_perf 00:06:37.298 ************************************ 00:06:37.298 19:07:14 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:37.298 00:06:37.298 real 0m3.658s 00:06:37.298 user 0m3.141s 00:06:37.298 sys 0m0.285s 00:06:37.298 19:07:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.298 19:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:37.298 ************************************ 00:06:37.298 END TEST thread 00:06:37.298 ************************************ 00:06:37.298 19:07:14 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:37.298 19:07:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:37.298 19:07:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:37.298 19:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:37.298 ************************************ 00:06:37.298 START TEST accel 00:06:37.298 ************************************ 00:06:37.298 19:07:14 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:37.298 * Looking for test storage... 00:06:37.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:37.298 19:07:14 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:37.298 19:07:14 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:37.298 19:07:14 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.298 19:07:14 -- accel/accel.sh@59 -- # spdk_tgt_pid=59314 00:06:37.298 19:07:14 -- accel/accel.sh@60 -- # waitforlisten 59314 00:06:37.298 19:07:14 -- common/autotest_common.sh@817 -- # '[' -z 59314 ']' 00:06:37.298 19:07:14 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:37.298 19:07:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.298 19:07:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:37.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.298 19:07:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
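Note: the waitforlisten call traced above only exposes its locals (rpc_addr, max_retries) and the banner it echoes. As a rough sketch of the idea (not the real autotest_common.sh implementation; the _sketch name is made up here), the helper amounts to polling until the freshly started target answers on its RPC socket:

waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket ${rpc_addr}..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target exited before it ever listened
        [[ -S $rpc_addr ]] && return 0            # socket present; the real helper also confirms it answers RPCs
        sleep 0.1
    done
    return 1
}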
00:06:37.298 19:07:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:37.298 19:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:37.298 19:07:14 -- accel/accel.sh@58 -- # build_accel_config 00:06:37.298 19:07:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.298 19:07:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.298 19:07:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.298 19:07:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.298 19:07:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.298 19:07:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.298 19:07:14 -- accel/accel.sh@42 -- # jq -r . 00:06:37.557 [2024-02-14 19:07:14.750582] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:37.557 [2024-02-14 19:07:14.750720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59314 ] 00:06:37.557 [2024-02-14 19:07:14.907909] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.815 [2024-02-14 19:07:15.079041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.815 [2024-02-14 19:07:15.079256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.815 [2024-02-14 19:07:15.079304] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:39.193 19:07:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:39.193 19:07:16 -- common/autotest_common.sh@850 -- # return 0 00:06:39.193 19:07:16 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:39.193 19:07:16 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:39.193 19:07:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.193 19:07:16 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:39.193 19:07:16 -- common/autotest_common.sh@10 -- # set +x 00:06:39.193 19:07:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # IFS== 00:06:39.193 19:07:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:39.193 19:07:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:39.193 19:07:16 -- accel/accel.sh@67 -- # killprocess 59314 00:06:39.193 19:07:16 -- common/autotest_common.sh@924 -- # '[' -z 59314 ']' 00:06:39.193 19:07:16 -- common/autotest_common.sh@928 -- # kill -0 59314 00:06:39.193 19:07:16 -- common/autotest_common.sh@929 -- # uname 00:06:39.193 19:07:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:39.193 19:07:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 59314 00:06:39.193 19:07:16 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:39.193 killing process with pid 59314 00:06:39.193 19:07:16 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:39.193 19:07:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 59314' 00:06:39.193 19:07:16 -- common/autotest_common.sh@943 -- # kill 59314 00:06:39.193 [2024-02-14 19:07:16.486324] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:39.193 19:07:16 -- common/autotest_common.sh@948 -- # wait 59314 00:06:41.096 19:07:18 -- accel/accel.sh@68 -- # trap - ERR 00:06:41.096 19:07:18 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:41.096 19:07:18 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:41.096 19:07:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:41.096 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:41.096 19:07:18 -- common/autotest_common.sh@1102 -- # accel_perf -h 00:06:41.096 19:07:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:41.096 19:07:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.096 19:07:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.096 19:07:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.096 19:07:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.096 19:07:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.096 19:07:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.096 19:07:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.096 19:07:18 -- accel/accel.sh@42 -- # jq -r . 
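Note: the exp_opcs array looped over above is filled by flattening the accel_get_opc_assignments JSON through jq; the same pipeline run against a hand-written stand-in (the JSON body here is illustrative, not the actual RPC reply):

echo '{"copy": "software", "crc32c": "software"}' \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# prints:
#   copy=software
#   crc32c=software
# each resulting line is then split on "=" exactly as the IFS== read in the trace does:
IFS== read -r opc module <<< 'copy=software'
echo "$opc handled by $module"   # copy handled by software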
00:06:41.356 19:07:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.356 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:41.356 19:07:18 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:41.356 19:07:18 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:06:41.356 19:07:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:41.356 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:41.356 ************************************ 00:06:41.356 START TEST accel_missing_filename 00:06:41.356 ************************************ 00:06:41.356 19:07:18 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress 00:06:41.356 19:07:18 -- common/autotest_common.sh@638 -- # local es=0 00:06:41.356 19:07:18 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:41.356 19:07:18 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:41.356 19:07:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:41.356 19:07:18 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:41.356 19:07:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:41.356 19:07:18 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:41.356 19:07:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:41.356 19:07:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.356 19:07:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.356 19:07:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.356 19:07:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.356 19:07:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.356 19:07:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.356 19:07:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.356 19:07:18 -- accel/accel.sh@42 -- # jq -r . 00:06:41.356 [2024-02-14 19:07:18.630372] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:41.356 [2024-02-14 19:07:18.630538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59397 ] 00:06:41.615 [2024-02-14 19:07:18.789725] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.615 [2024-02-14 19:07:18.961001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.615 [2024-02-14 19:07:18.961168] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:41.875 [2024-02-14 19:07:19.136886] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.875 [2024-02-14 19:07:19.137022] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:42.443 [2024-02-14 19:07:19.575300] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:42.701 A filename is required. 
00:06:42.701 19:07:19 -- common/autotest_common.sh@641 -- # es=234 00:06:42.701 19:07:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:42.701 19:07:19 -- common/autotest_common.sh@650 -- # es=106 00:06:42.701 19:07:19 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:42.701 19:07:19 -- common/autotest_common.sh@658 -- # es=1 00:06:42.701 19:07:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:42.701 00:06:42.701 real 0m1.340s 00:06:42.701 user 0m1.141s 00:06:42.701 sys 0m0.140s 00:06:42.701 19:07:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.701 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.701 ************************************ 00:06:42.701 END TEST accel_missing_filename 00:06:42.701 ************************************ 00:06:42.702 19:07:19 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.702 19:07:19 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:06:42.702 19:07:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:42.702 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.702 ************************************ 00:06:42.702 START TEST accel_compress_verify 00:06:42.702 ************************************ 00:06:42.702 19:07:19 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.702 19:07:19 -- common/autotest_common.sh@638 -- # local es=0 00:06:42.702 19:07:19 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.702 19:07:19 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:42.702 19:07:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:42.702 19:07:19 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:42.702 19:07:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:42.702 19:07:19 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.702 19:07:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.702 19:07:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.702 19:07:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.702 19:07:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.702 19:07:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.702 19:07:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.702 19:07:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.702 19:07:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.702 19:07:19 -- accel/accel.sh@42 -- # jq -r . 00:06:42.702 [2024-02-14 19:07:20.032119] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
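Note: the es=234 -> 106 -> 1 sequence above is the bookkeeping around an expected failure: the exit status of the wrapped accel_perf is captured, signal-range codes (above 128) are folded down, and any remaining non-zero value is exactly the failure the test wanted to see. A rough sketch of the pattern (not the real autotest_common.sh code; the _sketch name is made up here):

NOT_sketch() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))   # 234 becomes 106 in the run above
    (( es != 0 ))                          # succeed only if the wrapped command failed
}
NOT_sketch false && echo 'command failed, as the test expects'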
00:06:42.702 [2024-02-14 19:07:20.032306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59428 ] 00:06:42.960 [2024-02-14 19:07:20.193489] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.960 [2024-02-14 19:07:20.372726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.960 [2024-02-14 19:07:20.372857] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:43.219 [2024-02-14 19:07:20.545854] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.219 [2024-02-14 19:07:20.545963] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:43.787 [2024-02-14 19:07:20.976716] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:44.046 00:06:44.046 Compression does not support the verify option, aborting. 00:06:44.046 19:07:21 -- common/autotest_common.sh@641 -- # es=161 00:06:44.046 19:07:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:44.046 19:07:21 -- common/autotest_common.sh@650 -- # es=33 00:06:44.046 19:07:21 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:44.046 19:07:21 -- common/autotest_common.sh@658 -- # es=1 00:06:44.046 19:07:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:44.046 00:06:44.046 real 0m1.336s 00:06:44.046 user 0m1.127s 00:06:44.046 sys 0m0.146s 00:06:44.046 19:07:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.046 ************************************ 00:06:44.046 END TEST accel_compress_verify 00:06:44.046 ************************************ 00:06:44.046 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.046 19:07:21 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:44.046 19:07:21 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:06:44.046 19:07:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:44.046 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.046 ************************************ 00:06:44.046 START TEST accel_wrong_workload 00:06:44.046 ************************************ 00:06:44.046 19:07:21 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w foobar 00:06:44.046 19:07:21 -- common/autotest_common.sh@638 -- # local es=0 00:06:44.046 19:07:21 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:44.046 19:07:21 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:44.046 19:07:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:44.046 19:07:21 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:44.046 19:07:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:44.046 19:07:21 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:44.046 19:07:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:44.046 19:07:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.046 19:07:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.046 19:07:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:06:44.046 19:07:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.046 19:07:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.046 19:07:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.046 19:07:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.046 19:07:21 -- accel/accel.sh@42 -- # jq -r . 00:06:44.046 Unsupported workload type: foobar 00:06:44.046 [2024-02-14 19:07:21.420012] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:44.046 accel_perf options: 00:06:44.046 [-h help message] 00:06:44.046 [-q queue depth per core] 00:06:44.046 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:44.046 [-T number of threads per core 00:06:44.046 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:44.046 [-t time in seconds] 00:06:44.046 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:44.046 [ dif_verify, , dif_generate, dif_generate_copy 00:06:44.046 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:44.046 [-l for compress/decompress workloads, name of uncompressed input file 00:06:44.046 [-S for crc32c workload, use this seed value (default 0) 00:06:44.046 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:44.046 [-f for fill workload, use this BYTE value (default 255) 00:06:44.046 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:44.046 [-y verify result if this switch is on] 00:06:44.046 [-a tasks to allocate per core (default: same value as -q)] 00:06:44.046 Can be used to spread operations across a wider range of memory. 
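Note: the failure above is plain argument validation: foobar is not one of the -w workload types in the list accel_perf just printed. Any value from that list is accepted; for example, the copy case exercised later in this log boils down to:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y   # plus the JSON config the test passes via -c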
00:06:44.046 19:07:21 -- common/autotest_common.sh@641 -- # es=1 00:06:44.046 19:07:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:44.046 19:07:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:44.046 19:07:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:44.046 00:06:44.046 real 0m0.069s 00:06:44.046 user 0m0.081s 00:06:44.046 sys 0m0.044s 00:06:44.046 19:07:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.046 ************************************ 00:06:44.046 END TEST accel_wrong_workload 00:06:44.046 ************************************ 00:06:44.046 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.306 19:07:21 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:44.306 19:07:21 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:06:44.306 19:07:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:44.306 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.306 ************************************ 00:06:44.306 START TEST accel_negative_buffers 00:06:44.306 ************************************ 00:06:44.306 19:07:21 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:44.306 19:07:21 -- common/autotest_common.sh@638 -- # local es=0 00:06:44.306 19:07:21 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:44.306 19:07:21 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:44.306 19:07:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:44.306 19:07:21 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:44.306 19:07:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:44.306 19:07:21 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:44.306 19:07:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:44.306 19:07:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.306 19:07:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.306 19:07:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.306 19:07:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.306 19:07:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.306 19:07:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.306 19:07:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.306 19:07:21 -- accel/accel.sh@42 -- # jq -r . 00:06:44.306 -x option must be non-negative. 00:06:44.306 [2024-02-14 19:07:21.542542] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:44.306 accel_perf options: 00:06:44.306 [-h help message] 00:06:44.306 [-q queue depth per core] 00:06:44.306 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:44.306 [-T number of threads per core 00:06:44.306 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:44.306 [-t time in seconds] 00:06:44.306 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:44.306 [ dif_verify, , dif_generate, dif_generate_copy 00:06:44.306 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:44.306 [-l for compress/decompress workloads, name of uncompressed input file 00:06:44.306 [-S for crc32c workload, use this seed value (default 0) 00:06:44.306 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:44.306 [-f for fill workload, use this BYTE value (default 255) 00:06:44.306 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:44.306 [-y verify result if this switch is on] 00:06:44.306 [-a tasks to allocate per core (default: same value as -q)] 00:06:44.306 Can be used to spread operations across a wider range of memory. 00:06:44.306 19:07:21 -- common/autotest_common.sh@641 -- # es=1 00:06:44.306 19:07:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:44.306 19:07:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:44.306 ************************************ 00:06:44.306 END TEST accel_negative_buffers 00:06:44.306 ************************************ 00:06:44.306 19:07:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:44.306 00:06:44.306 real 0m0.070s 00:06:44.306 user 0m0.083s 00:06:44.306 sys 0m0.039s 00:06:44.306 19:07:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.306 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.306 19:07:21 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:44.306 19:07:21 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:06:44.306 19:07:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:44.306 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.306 ************************************ 00:06:44.306 START TEST accel_crc32c 00:06:44.306 ************************************ 00:06:44.306 19:07:21 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:44.306 19:07:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.306 19:07:21 -- accel/accel.sh@17 -- # local accel_module 00:06:44.306 19:07:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:44.306 19:07:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:44.306 19:07:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.306 19:07:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.306 19:07:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.306 19:07:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.306 19:07:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.306 19:07:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.306 19:07:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.306 19:07:21 -- accel/accel.sh@42 -- # jq -r . 00:06:44.306 [2024-02-14 19:07:21.662095] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
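Note: every accel_perf in this log is launched with '-c /dev/fd/62', the JSON produced by build_accel_config and handed over on an inherited descriptor. The exact wiring is not visible in the xtrace; one equivalent way to reproduce the pattern from a shell (the empty JSON object is only a placeholder for the assembled config):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y -c <(echo '{}' | jq -r .)
# bash exposes the process substitution as a /dev/fd/NN path, which is what appears after -c in the trace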
00:06:44.306 [2024-02-14 19:07:21.662242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59506 ] 00:06:44.565 [2024-02-14 19:07:21.820407] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.825 [2024-02-14 19:07:22.003136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.825 [2024-02-14 19:07:22.003267] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:46.202 [2024-02-14 19:07:23.206223] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:46.770 19:07:23 -- accel/accel.sh@18 -- # out=' 00:06:46.770 SPDK Configuration: 00:06:46.770 Core mask: 0x1 00:06:46.770 00:06:46.770 Accel Perf Configuration: 00:06:46.770 Workload Type: crc32c 00:06:46.770 CRC-32C seed: 32 00:06:46.770 Transfer size: 4096 bytes 00:06:46.770 Vector count 1 00:06:46.770 Module: software 00:06:46.770 Queue depth: 32 00:06:46.770 Allocate depth: 32 00:06:46.770 # threads/core: 1 00:06:46.770 Run time: 1 seconds 00:06:46.770 Verify: Yes 00:06:46.770 00:06:46.770 Running for 1 seconds... 00:06:46.770 00:06:46.770 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.770 ------------------------------------------------------------------------------------ 00:06:46.770 0,0 433248/s 1692 MiB/s 0 0 00:06:46.770 ==================================================================================== 00:06:46.770 Total 433248/s 1692 MiB/s 0 0' 00:06:46.770 19:07:23 -- accel/accel.sh@20 -- # IFS=: 00:06:46.770 19:07:23 -- accel/accel.sh@20 -- # read -r var val 00:06:46.770 19:07:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:46.770 19:07:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:46.770 19:07:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.770 19:07:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.770 19:07:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.770 19:07:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.770 19:07:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.770 19:07:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.770 19:07:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.770 19:07:23 -- accel/accel.sh@42 -- # jq -r . 00:06:46.770 [2024-02-14 19:07:23.993783] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
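Note: the MiB/s column above is simply transfers per second times the 4096-byte transfer size; checking the first crc32c run against the printed numbers:

echo $(( 433248 * 4096 / 1024 / 1024 ))   # 1692 MiB/s, matching the 0,0 row above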
00:06:46.770 [2024-02-14 19:07:23.993966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59532 ] 00:06:46.770 [2024-02-14 19:07:24.165165] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.028 [2024-02-14 19:07:24.347197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.028 [2024-02-14 19:07:24.347549] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val= 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val= 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val=0x1 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val= 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val= 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val=crc32c 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val=32 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val= 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val=software 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val=32 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val=32 00:06:47.287 
19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val=1 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val=Yes 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val= 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:47.287 19:07:24 -- accel/accel.sh@21 -- # val= 00:06:47.287 19:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:47.287 19:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:48.225 [2024-02-14 19:07:25.519950] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:49.161 19:07:26 -- accel/accel.sh@21 -- # val= 00:06:49.161 19:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.161 19:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.161 19:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.162 19:07:26 -- accel/accel.sh@21 -- # val= 00:06:49.162 19:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.162 19:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.162 19:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.162 19:07:26 -- accel/accel.sh@21 -- # val= 00:06:49.162 19:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.162 19:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.162 19:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.162 19:07:26 -- accel/accel.sh@21 -- # val= 00:06:49.162 19:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.162 19:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.162 19:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.162 19:07:26 -- accel/accel.sh@21 -- # val= 00:06:49.162 19:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.162 19:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.162 19:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.162 19:07:26 -- accel/accel.sh@21 -- # val= 00:06:49.162 19:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.162 19:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.162 19:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.162 ************************************ 00:06:49.162 END TEST accel_crc32c 00:06:49.162 ************************************ 00:06:49.162 19:07:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.162 19:07:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:49.162 19:07:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.162 00:06:49.162 real 0m4.629s 00:06:49.162 user 0m4.134s 00:06:49.162 sys 0m0.285s 00:06:49.162 19:07:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.162 19:07:26 -- 
common/autotest_common.sh@10 -- # set +x 00:06:49.162 19:07:26 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:49.162 19:07:26 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:06:49.162 19:07:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:49.162 19:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:49.162 ************************************ 00:06:49.162 START TEST accel_crc32c_C2 00:06:49.162 ************************************ 00:06:49.162 19:07:26 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:49.162 19:07:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.162 19:07:26 -- accel/accel.sh@17 -- # local accel_module 00:06:49.162 19:07:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:49.162 19:07:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:49.162 19:07:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.162 19:07:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.162 19:07:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.162 19:07:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.162 19:07:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.162 19:07:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.162 19:07:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.162 19:07:26 -- accel/accel.sh@42 -- # jq -r . 00:06:49.162 [2024-02-14 19:07:26.349005] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:49.162 [2024-02-14 19:07:26.349190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59573 ] 00:06:49.162 [2024-02-14 19:07:26.518764] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.421 [2024-02-14 19:07:26.688237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.421 [2024-02-14 19:07:26.688343] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:50.798 [2024-02-14 19:07:27.849715] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:51.371 19:07:28 -- accel/accel.sh@18 -- # out=' 00:06:51.371 SPDK Configuration: 00:06:51.371 Core mask: 0x1 00:06:51.371 00:06:51.371 Accel Perf Configuration: 00:06:51.371 Workload Type: crc32c 00:06:51.371 CRC-32C seed: 0 00:06:51.371 Transfer size: 4096 bytes 00:06:51.371 Vector count 2 00:06:51.371 Module: software 00:06:51.371 Queue depth: 32 00:06:51.371 Allocate depth: 32 00:06:51.371 # threads/core: 1 00:06:51.371 Run time: 1 seconds 00:06:51.371 Verify: Yes 00:06:51.371 00:06:51.371 Running for 1 seconds... 
00:06:51.371 00:06:51.371 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.371 ------------------------------------------------------------------------------------ 00:06:51.371 0,0 324928/s 2538 MiB/s 0 0 00:06:51.371 ==================================================================================== 00:06:51.371 Total 324928/s 1269 MiB/s 0 0' 00:06:51.371 19:07:28 -- accel/accel.sh@20 -- # IFS=: 00:06:51.371 19:07:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:51.371 19:07:28 -- accel/accel.sh@20 -- # read -r var val 00:06:51.371 19:07:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:51.371 19:07:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.371 19:07:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.371 19:07:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.371 19:07:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.371 19:07:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.371 19:07:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.371 19:07:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.371 19:07:28 -- accel/accel.sh@42 -- # jq -r . 00:06:51.371 [2024-02-14 19:07:28.659682] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:51.371 [2024-02-14 19:07:28.659848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59605 ] 00:06:51.630 [2024-02-14 19:07:28.835923] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.630 [2024-02-14 19:07:29.032778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.630 [2024-02-14 19:07:29.032866] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val= 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val= 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val=0x1 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val= 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val= 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val=crc32c 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 
-- accel/accel.sh@21 -- # val=0 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val= 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val=software 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val=32 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val=32 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val=1 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val=Yes 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val= 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:51.888 19:07:29 -- accel/accel.sh@21 -- # val= 00:06:51.888 19:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:51.888 19:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.824 [2024-02-14 19:07:30.198337] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:53.759 19:07:30 -- accel/accel.sh@21 -- # val= 00:06:53.759 19:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.759 19:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.759 19:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.759 19:07:30 -- accel/accel.sh@21 -- # val= 00:06:53.759 19:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.759 19:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.759 19:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.759 19:07:30 -- accel/accel.sh@21 -- # val= 00:06:53.759 19:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.759 19:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.759 19:07:30 
-- accel/accel.sh@20 -- # read -r var val 00:06:53.759 19:07:30 -- accel/accel.sh@21 -- # val= 00:06:53.759 19:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.759 19:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.759 19:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.760 19:07:30 -- accel/accel.sh@21 -- # val= 00:06:53.760 19:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.760 19:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.760 19:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.760 19:07:30 -- accel/accel.sh@21 -- # val= 00:06:53.760 19:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.760 19:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.760 19:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.760 19:07:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.760 19:07:30 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:53.760 19:07:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.760 00:06:53.760 real 0m4.679s 00:06:53.760 user 0m4.178s 00:06:53.760 sys 0m0.295s 00:06:53.760 19:07:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.760 ************************************ 00:06:53.760 END TEST accel_crc32c_C2 00:06:53.760 ************************************ 00:06:53.760 19:07:30 -- common/autotest_common.sh@10 -- # set +x 00:06:53.760 19:07:31 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:53.760 19:07:31 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:06:53.760 19:07:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:53.760 19:07:31 -- common/autotest_common.sh@10 -- # set +x 00:06:53.760 ************************************ 00:06:53.760 START TEST accel_copy 00:06:53.760 ************************************ 00:06:53.760 19:07:31 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy -y 00:06:53.760 19:07:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.760 19:07:31 -- accel/accel.sh@17 -- # local accel_module 00:06:53.760 19:07:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:53.760 19:07:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:53.760 19:07:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.760 19:07:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.760 19:07:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.760 19:07:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.760 19:07:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.760 19:07:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.760 19:07:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.760 19:07:31 -- accel/accel.sh@42 -- # jq -r . 00:06:53.760 [2024-02-14 19:07:31.074776] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:53.760 [2024-02-14 19:07:31.074894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59651 ] 00:06:54.018 [2024-02-14 19:07:31.235175] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.018 [2024-02-14 19:07:31.420886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.018 [2024-02-14 19:07:31.420977] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:55.396 [2024-02-14 19:07:32.601918] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:55.963 19:07:33 -- accel/accel.sh@18 -- # out=' 00:06:55.963 SPDK Configuration: 00:06:55.963 Core mask: 0x1 00:06:55.963 00:06:55.963 Accel Perf Configuration: 00:06:55.963 Workload Type: copy 00:06:55.963 Transfer size: 4096 bytes 00:06:55.963 Vector count 1 00:06:55.963 Module: software 00:06:55.963 Queue depth: 32 00:06:55.963 Allocate depth: 32 00:06:55.963 # threads/core: 1 00:06:55.963 Run time: 1 seconds 00:06:55.963 Verify: Yes 00:06:55.963 00:06:55.963 Running for 1 seconds... 00:06:55.963 00:06:55.963 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.963 ------------------------------------------------------------------------------------ 00:06:55.963 0,0 260736/s 1018 MiB/s 0 0 00:06:55.963 ==================================================================================== 00:06:55.963 Total 260736/s 1018 MiB/s 0 0' 00:06:55.963 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:55.963 19:07:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:55.963 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:55.964 19:07:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:55.964 19:07:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.964 19:07:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.964 19:07:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.964 19:07:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.964 19:07:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.964 19:07:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.964 19:07:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.964 19:07:33 -- accel/accel.sh@42 -- # jq -r . 00:06:56.223 [2024-02-14 19:07:33.381462] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:56.223 [2024-02-14 19:07:33.381677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59677 ] 00:06:56.223 [2024-02-14 19:07:33.552549] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.482 [2024-02-14 19:07:33.717280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.482 [2024-02-14 19:07:33.717383] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val= 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val= 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val=0x1 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val= 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val= 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val=copy 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val= 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val=software 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val=32 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val=32 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val=1 00:06:56.482 
19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val=Yes 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val= 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.482 19:07:33 -- accel/accel.sh@21 -- # val= 00:06:56.482 19:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.482 19:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:57.861 [2024-02-14 19:07:34.877730] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:58.429 19:07:35 -- accel/accel.sh@21 -- # val= 00:06:58.429 19:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.429 19:07:35 -- accel/accel.sh@21 -- # val= 00:06:58.429 19:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.429 19:07:35 -- accel/accel.sh@21 -- # val= 00:06:58.429 19:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.429 19:07:35 -- accel/accel.sh@21 -- # val= 00:06:58.429 19:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.429 19:07:35 -- accel/accel.sh@21 -- # val= 00:06:58.429 19:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.429 19:07:35 -- accel/accel.sh@21 -- # val= 00:06:58.429 19:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.429 19:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.429 19:07:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.429 19:07:35 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:58.429 19:07:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.429 00:06:58.429 real 0m4.564s 00:06:58.429 user 0m4.072s 00:06:58.429 sys 0m0.285s 00:06:58.429 19:07:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.429 ************************************ 00:06:58.429 END TEST accel_copy 00:06:58.429 ************************************ 00:06:58.429 19:07:35 -- common/autotest_common.sh@10 -- # set +x 00:06:58.429 19:07:35 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.429 19:07:35 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:06:58.429 19:07:35 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:06:58.429 19:07:35 -- common/autotest_common.sh@10 -- # set +x 00:06:58.429 ************************************ 00:06:58.429 START TEST accel_fill 00:06:58.429 ************************************ 00:06:58.429 19:07:35 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.429 19:07:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.429 19:07:35 -- accel/accel.sh@17 -- # local accel_module 00:06:58.429 19:07:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.429 19:07:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.429 19:07:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.429 19:07:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.429 19:07:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.429 19:07:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.429 19:07:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.429 19:07:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.429 19:07:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.429 19:07:35 -- accel/accel.sh@42 -- # jq -r . 00:06:58.429 [2024-02-14 19:07:35.703555] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:58.429 [2024-02-14 19:07:35.703721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:06:58.688 [2024-02-14 19:07:35.872580] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.688 [2024-02-14 19:07:36.031487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.688 [2024-02-14 19:07:36.031637] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:00.077 [2024-02-14 19:07:37.203960] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:00.653 19:07:37 -- accel/accel.sh@18 -- # out=' 00:07:00.653 SPDK Configuration: 00:07:00.653 Core mask: 0x1 00:07:00.653 00:07:00.653 Accel Perf Configuration: 00:07:00.653 Workload Type: fill 00:07:00.653 Fill pattern: 0x80 00:07:00.653 Transfer size: 4096 bytes 00:07:00.653 Vector count 1 00:07:00.653 Module: software 00:07:00.653 Queue depth: 64 00:07:00.653 Allocate depth: 64 00:07:00.653 # threads/core: 1 00:07:00.653 Run time: 1 seconds 00:07:00.653 Verify: Yes 00:07:00.653 00:07:00.653 Running for 1 seconds... 
00:07:00.653 00:07:00.653 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.653 ------------------------------------------------------------------------------------ 00:07:00.653 0,0 418944/s 1636 MiB/s 0 0 00:07:00.653 ==================================================================================== 00:07:00.653 Total 418944/s 1636 MiB/s 0 0' 00:07:00.653 19:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.653 19:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.653 19:07:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.653 19:07:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.653 19:07:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.653 19:07:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.653 19:07:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.653 19:07:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.653 19:07:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.653 19:07:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.653 19:07:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.653 19:07:37 -- accel/accel.sh@42 -- # jq -r . 00:07:00.653 [2024-02-14 19:07:37.996352] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:00.653 [2024-02-14 19:07:37.996546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59750 ] 00:07:00.912 [2024-02-14 19:07:38.171663] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.171 [2024-02-14 19:07:38.384891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.171 [2024-02-14 19:07:38.385012] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:01.171 19:07:38 -- accel/accel.sh@21 -- # val= 00:07:01.171 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.171 19:07:38 -- accel/accel.sh@21 -- # val= 00:07:01.171 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.171 19:07:38 -- accel/accel.sh@21 -- # val=0x1 00:07:01.171 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.171 19:07:38 -- accel/accel.sh@21 -- # val= 00:07:01.171 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.171 19:07:38 -- accel/accel.sh@21 -- # val= 00:07:01.171 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.171 19:07:38 -- accel/accel.sh@21 -- # val=fill 00:07:01.171 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.171 19:07:38 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # read -r var val 
00:07:01.171 19:07:38 -- accel/accel.sh@21 -- # val=0x80 00:07:01.171 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.171 19:07:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.171 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.171 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.171 19:07:38 -- accel/accel.sh@21 -- # val= 00:07:01.171 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.172 19:07:38 -- accel/accel.sh@21 -- # val=software 00:07:01.172 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.172 19:07:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.172 19:07:38 -- accel/accel.sh@21 -- # val=64 00:07:01.172 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.172 19:07:38 -- accel/accel.sh@21 -- # val=64 00:07:01.172 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.172 19:07:38 -- accel/accel.sh@21 -- # val=1 00:07:01.172 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.172 19:07:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.172 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.172 19:07:38 -- accel/accel.sh@21 -- # val=Yes 00:07:01.172 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.172 19:07:38 -- accel/accel.sh@21 -- # val= 00:07:01.172 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:01.172 19:07:38 -- accel/accel.sh@21 -- # val= 00:07:01.172 19:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:01.172 19:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:02.549 [2024-02-14 19:07:39.541799] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:03.124 19:07:40 -- accel/accel.sh@21 -- # val= 00:07:03.124 19:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:03.124 19:07:40 -- accel/accel.sh@21 -- # val= 00:07:03.124 19:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:03.124 19:07:40 -- accel/accel.sh@21 -- # val= 00:07:03.124 19:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # 
IFS=: 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:03.124 19:07:40 -- accel/accel.sh@21 -- # val= 00:07:03.124 19:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:03.124 19:07:40 -- accel/accel.sh@21 -- # val= 00:07:03.124 19:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:03.124 19:07:40 -- accel/accel.sh@21 -- # val= 00:07:03.124 19:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:03.124 19:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:03.124 ************************************ 00:07:03.124 END TEST accel_fill 00:07:03.124 ************************************ 00:07:03.124 19:07:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.124 19:07:40 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:03.124 19:07:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.124 00:07:03.124 real 0m4.636s 00:07:03.124 user 0m4.141s 00:07:03.124 sys 0m0.288s 00:07:03.124 19:07:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.124 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.124 19:07:40 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:03.124 19:07:40 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:07:03.124 19:07:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:03.124 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.124 ************************************ 00:07:03.124 START TEST accel_copy_crc32c 00:07:03.124 ************************************ 00:07:03.124 19:07:40 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y 00:07:03.124 19:07:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.124 19:07:40 -- accel/accel.sh@17 -- # local accel_module 00:07:03.124 19:07:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:03.124 19:07:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:03.124 19:07:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.124 19:07:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.124 19:07:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.124 19:07:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.124 19:07:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.124 19:07:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.124 19:07:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.124 19:07:40 -- accel/accel.sh@42 -- # jq -r . 00:07:03.124 [2024-02-14 19:07:40.390831] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:03.124 [2024-02-14 19:07:40.391022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59795 ] 00:07:03.383 [2024-02-14 19:07:40.562244] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.383 [2024-02-14 19:07:40.715361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.383 [2024-02-14 19:07:40.715727] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:04.761 [2024-02-14 19:07:41.895525] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:05.328 19:07:42 -- accel/accel.sh@18 -- # out=' 00:07:05.328 SPDK Configuration: 00:07:05.328 Core mask: 0x1 00:07:05.328 00:07:05.328 Accel Perf Configuration: 00:07:05.328 Workload Type: copy_crc32c 00:07:05.328 CRC-32C seed: 0 00:07:05.328 Vector size: 4096 bytes 00:07:05.328 Transfer size: 4096 bytes 00:07:05.328 Vector count 1 00:07:05.328 Module: software 00:07:05.328 Queue depth: 32 00:07:05.328 Allocate depth: 32 00:07:05.328 # threads/core: 1 00:07:05.328 Run time: 1 seconds 00:07:05.328 Verify: Yes 00:07:05.328 00:07:05.328 Running for 1 seconds... 00:07:05.328 00:07:05.328 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.328 ------------------------------------------------------------------------------------ 00:07:05.328 0,0 221216/s 864 MiB/s 0 0 00:07:05.328 ==================================================================================== 00:07:05.328 Total 221216/s 864 MiB/s 0 0' 00:07:05.328 19:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:05.328 19:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:05.328 19:07:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:05.329 19:07:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.329 19:07:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:05.329 19:07:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.329 19:07:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.329 19:07:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.329 19:07:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.329 19:07:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.329 19:07:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.329 19:07:42 -- accel/accel.sh@42 -- # jq -r . 00:07:05.329 [2024-02-14 19:07:42.676953] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:05.329 [2024-02-14 19:07:42.677399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59822 ] 00:07:05.588 [2024-02-14 19:07:42.843858] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.847 [2024-02-14 19:07:43.014404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.847 [2024-02-14 19:07:43.014531] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:05.847 19:07:43 -- accel/accel.sh@21 -- # val= 00:07:05.847 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.847 19:07:43 -- accel/accel.sh@21 -- # val= 00:07:05.847 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.847 19:07:43 -- accel/accel.sh@21 -- # val=0x1 00:07:05.847 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.847 19:07:43 -- accel/accel.sh@21 -- # val= 00:07:05.847 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.847 19:07:43 -- accel/accel.sh@21 -- # val= 00:07:05.847 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.847 19:07:43 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:05.847 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.847 19:07:43 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.847 19:07:43 -- accel/accel.sh@21 -- # val=0 00:07:05.847 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.847 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # val= 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # val=software 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # 
val=32 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # val=32 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # val=1 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # val=Yes 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # val= 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.848 19:07:43 -- accel/accel.sh@21 -- # val= 00:07:05.848 19:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.848 19:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:06.785 [2024-02-14 19:07:44.177018] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:07.722 19:07:44 -- accel/accel.sh@21 -- # val= 00:07:07.722 19:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # IFS=: 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # read -r var val 00:07:07.722 19:07:44 -- accel/accel.sh@21 -- # val= 00:07:07.722 19:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # IFS=: 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # read -r var val 00:07:07.722 19:07:44 -- accel/accel.sh@21 -- # val= 00:07:07.722 19:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # IFS=: 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # read -r var val 00:07:07.722 19:07:44 -- accel/accel.sh@21 -- # val= 00:07:07.722 19:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # IFS=: 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # read -r var val 00:07:07.722 19:07:44 -- accel/accel.sh@21 -- # val= 00:07:07.722 19:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # IFS=: 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # read -r var val 00:07:07.722 19:07:44 -- accel/accel.sh@21 -- # val= 00:07:07.722 19:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # IFS=: 00:07:07.722 19:07:44 -- accel/accel.sh@20 -- # read -r var val 00:07:07.722 19:07:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.722 19:07:44 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:07.722 19:07:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.722 ************************************ 00:07:07.722 END TEST accel_copy_crc32c 
00:07:07.722 ************************************ 00:07:07.722 00:07:07.722 real 0m4.569s 00:07:07.722 user 0m4.073s 00:07:07.722 sys 0m0.288s 00:07:07.722 19:07:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.722 19:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 19:07:44 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:07.722 19:07:44 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:07:07.722 19:07:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:07.722 19:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 ************************************ 00:07:07.722 START TEST accel_copy_crc32c_C2 00:07:07.722 ************************************ 00:07:07.722 19:07:44 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:07.722 19:07:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.722 19:07:44 -- accel/accel.sh@17 -- # local accel_module 00:07:07.722 19:07:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:07.722 19:07:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:07.722 19:07:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.722 19:07:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.722 19:07:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.722 19:07:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.722 19:07:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.722 19:07:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.722 19:07:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.722 19:07:44 -- accel/accel.sh@42 -- # jq -r . 00:07:07.722 [2024-02-14 19:07:45.015224] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:07.723 [2024-02-14 19:07:45.015375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59869 ] 00:07:07.981 [2024-02-14 19:07:45.187665] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.981 [2024-02-14 19:07:45.347555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.981 [2024-02-14 19:07:45.347656] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:09.370 [2024-02-14 19:07:46.506487] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:09.939 19:07:47 -- accel/accel.sh@18 -- # out=' 00:07:09.939 SPDK Configuration: 00:07:09.939 Core mask: 0x1 00:07:09.939 00:07:09.939 Accel Perf Configuration: 00:07:09.939 Workload Type: copy_crc32c 00:07:09.939 CRC-32C seed: 0 00:07:09.939 Vector size: 4096 bytes 00:07:09.939 Transfer size: 8192 bytes 00:07:09.939 Vector count 2 00:07:09.939 Module: software 00:07:09.939 Queue depth: 32 00:07:09.939 Allocate depth: 32 00:07:09.939 # threads/core: 1 00:07:09.939 Run time: 1 seconds 00:07:09.939 Verify: Yes 00:07:09.939 00:07:09.939 Running for 1 seconds... 
00:07:09.939 00:07:09.939 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.939 ------------------------------------------------------------------------------------ 00:07:09.939 0,0 150880/s 1178 MiB/s 0 0 00:07:09.939 ==================================================================================== 00:07:09.939 Total 150880/s 589 MiB/s 0 0' 00:07:09.939 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.939 19:07:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:09.939 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.939 19:07:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:09.939 19:07:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.939 19:07:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.939 19:07:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.939 19:07:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.939 19:07:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.939 19:07:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.939 19:07:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.939 19:07:47 -- accel/accel.sh@42 -- # jq -r . 00:07:09.939 [2024-02-14 19:07:47.327142] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:09.939 [2024-02-14 19:07:47.327316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59895 ] 00:07:10.199 [2024-02-14 19:07:47.500531] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.459 [2024-02-14 19:07:47.668724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.459 [2024-02-14 19:07:47.668820] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val= 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val= 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val=0x1 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val= 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val= 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 
00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val=0 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val= 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val=software 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val=32 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val=32 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val=1 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val=Yes 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val= 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.459 19:07:47 -- accel/accel.sh@21 -- # val= 00:07:10.459 19:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:10.459 19:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:11.836 [2024-02-14 19:07:48.830173] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:12.403 19:07:49 -- accel/accel.sh@21 -- # val= 00:07:12.403 19:07:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 19:07:49 -- accel/accel.sh@21 -- # val= 00:07:12.403 19:07:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 19:07:49 -- 
accel/accel.sh@20 -- # IFS=: 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 19:07:49 -- accel/accel.sh@21 -- # val= 00:07:12.403 19:07:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 19:07:49 -- accel/accel.sh@21 -- # val= 00:07:12.403 19:07:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 19:07:49 -- accel/accel.sh@21 -- # val= 00:07:12.403 19:07:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 19:07:49 -- accel/accel.sh@21 -- # val= 00:07:12.403 19:07:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # IFS=: 00:07:12.403 19:07:49 -- accel/accel.sh@20 -- # read -r var val 00:07:12.403 19:07:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.403 19:07:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:12.403 19:07:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.403 00:07:12.403 real 0m4.590s 00:07:12.403 user 0m4.084s 00:07:12.403 sys 0m0.298s 00:07:12.403 19:07:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.403 ************************************ 00:07:12.403 END TEST accel_copy_crc32c_C2 00:07:12.403 ************************************ 00:07:12.403 19:07:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.403 19:07:49 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:12.403 19:07:49 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:07:12.403 19:07:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:12.403 19:07:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.403 ************************************ 00:07:12.403 START TEST accel_dualcast 00:07:12.403 ************************************ 00:07:12.403 19:07:49 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dualcast -y 00:07:12.403 19:07:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.403 19:07:49 -- accel/accel.sh@17 -- # local accel_module 00:07:12.404 19:07:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:12.404 19:07:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:12.404 19:07:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.404 19:07:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.404 19:07:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.404 19:07:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.404 19:07:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.404 19:07:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.404 19:07:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.404 19:07:49 -- accel/accel.sh@42 -- # jq -r . 00:07:12.404 [2024-02-14 19:07:49.660450] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:12.404 [2024-02-14 19:07:49.661462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59936 ] 00:07:12.662 [2024-02-14 19:07:49.834861] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.662 [2024-02-14 19:07:50.044345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.662 [2024-02-14 19:07:50.044443] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:14.039 [2024-02-14 19:07:51.195638] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:14.607 19:07:51 -- accel/accel.sh@18 -- # out=' 00:07:14.607 SPDK Configuration: 00:07:14.607 Core mask: 0x1 00:07:14.607 00:07:14.607 Accel Perf Configuration: 00:07:14.607 Workload Type: dualcast 00:07:14.607 Transfer size: 4096 bytes 00:07:14.607 Vector count 1 00:07:14.607 Module: software 00:07:14.607 Queue depth: 32 00:07:14.607 Allocate depth: 32 00:07:14.607 # threads/core: 1 00:07:14.607 Run time: 1 seconds 00:07:14.607 Verify: Yes 00:07:14.607 00:07:14.607 Running for 1 seconds... 00:07:14.607 00:07:14.607 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.607 ------------------------------------------------------------------------------------ 00:07:14.607 0,0 311168/s 1215 MiB/s 0 0 00:07:14.607 ==================================================================================== 00:07:14.607 Total 311168/s 1215 MiB/s 0 0' 00:07:14.607 19:07:51 -- accel/accel.sh@20 -- # IFS=: 00:07:14.607 19:07:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:14.607 19:07:51 -- accel/accel.sh@20 -- # read -r var val 00:07:14.607 19:07:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:14.607 19:07:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.607 19:07:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.607 19:07:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.607 19:07:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.607 19:07:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.607 19:07:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.607 19:07:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.607 19:07:51 -- accel/accel.sh@42 -- # jq -r . 00:07:14.607 [2024-02-14 19:07:51.972241] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:14.607 [2024-02-14 19:07:51.972409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59962 ] 00:07:14.866 [2024-02-14 19:07:52.143141] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.124 [2024-02-14 19:07:52.301882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.124 [2024-02-14 19:07:52.301991] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val= 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val= 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val=0x1 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val= 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val= 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val=dualcast 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val= 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val=software 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val=32 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val=32 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val=1 00:07:15.124 
19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val=Yes 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val= 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.124 19:07:52 -- accel/accel.sh@21 -- # val= 00:07:15.124 19:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.124 19:07:52 -- accel/accel.sh@20 -- # read -r var val 00:07:16.065 [2024-02-14 19:07:53.468613] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:17.006 19:07:54 -- accel/accel.sh@21 -- # val= 00:07:17.006 19:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.006 19:07:54 -- accel/accel.sh@21 -- # val= 00:07:17.006 19:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.006 19:07:54 -- accel/accel.sh@21 -- # val= 00:07:17.006 19:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.006 19:07:54 -- accel/accel.sh@21 -- # val= 00:07:17.006 19:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.006 19:07:54 -- accel/accel.sh@21 -- # val= 00:07:17.006 19:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.006 19:07:54 -- accel/accel.sh@21 -- # val= 00:07:17.006 19:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.006 19:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.006 19:07:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.006 19:07:54 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:17.006 19:07:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.006 00:07:17.006 real 0m4.604s 00:07:17.006 user 0m4.060s 00:07:17.006 sys 0m0.329s 00:07:17.006 19:07:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.006 19:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.006 ************************************ 00:07:17.006 END TEST accel_dualcast 00:07:17.006 ************************************ 00:07:17.006 19:07:54 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:17.006 19:07:54 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:07:17.006 19:07:54 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:07:17.006 19:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.006 ************************************ 00:07:17.006 START TEST accel_compare 00:07:17.006 ************************************ 00:07:17.006 19:07:54 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compare -y 00:07:17.006 19:07:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.006 19:07:54 -- accel/accel.sh@17 -- # local accel_module 00:07:17.006 19:07:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:17.006 19:07:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:17.006 19:07:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.006 19:07:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.006 19:07:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.006 19:07:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.006 19:07:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.006 19:07:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.006 19:07:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.006 19:07:54 -- accel/accel.sh@42 -- # jq -r . 00:07:17.006 [2024-02-14 19:07:54.318341] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:17.006 [2024-02-14 19:07:54.318578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60014 ] 00:07:17.266 [2024-02-14 19:07:54.489869] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.266 [2024-02-14 19:07:54.663747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.266 [2024-02-14 19:07:54.663857] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:18.640 [2024-02-14 19:07:55.831100] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:19.207 19:07:56 -- accel/accel.sh@18 -- # out=' 00:07:19.207 SPDK Configuration: 00:07:19.207 Core mask: 0x1 00:07:19.207 00:07:19.207 Accel Perf Configuration: 00:07:19.207 Workload Type: compare 00:07:19.207 Transfer size: 4096 bytes 00:07:19.207 Vector count 1 00:07:19.207 Module: software 00:07:19.207 Queue depth: 32 00:07:19.207 Allocate depth: 32 00:07:19.207 # threads/core: 1 00:07:19.207 Run time: 1 seconds 00:07:19.207 Verify: Yes 00:07:19.207 00:07:19.207 Running for 1 seconds... 
00:07:19.207 00:07:19.207 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.207 ------------------------------------------------------------------------------------ 00:07:19.208 0,0 391200/s 1528 MiB/s 0 0 00:07:19.208 ==================================================================================== 00:07:19.208 Total 391200/s 1528 MiB/s 0 0' 00:07:19.208 19:07:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.208 19:07:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.208 19:07:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:19.208 19:07:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:19.208 19:07:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.208 19:07:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.208 19:07:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.208 19:07:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.208 19:07:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.208 19:07:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.208 19:07:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.208 19:07:56 -- accel/accel.sh@42 -- # jq -r . 00:07:19.208 [2024-02-14 19:07:56.605397] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:19.208 [2024-02-14 19:07:56.605644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60040 ] 00:07:19.466 [2024-02-14 19:07:56.774342] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.724 [2024-02-14 19:07:56.932415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.724 [2024-02-14 19:07:56.932577] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:19.724 19:07:57 -- accel/accel.sh@21 -- # val= 00:07:19.724 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.724 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.724 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.724 19:07:57 -- accel/accel.sh@21 -- # val= 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val=0x1 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val= 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val= 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val=compare 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val= 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val=software 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val=32 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val=32 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val=1 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val=Yes 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val= 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:19.725 19:07:57 -- accel/accel.sh@21 -- # val= 00:07:19.725 19:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:19.725 19:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:21.101 [2024-02-14 19:07:58.095171] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:21.668 19:07:58 -- accel/accel.sh@21 -- # val= 00:07:21.668 19:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:21.668 19:07:58 -- accel/accel.sh@21 -- # val= 00:07:21.668 19:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:21.668 19:07:58 -- accel/accel.sh@21 -- # val= 00:07:21.668 19:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:21.668 19:07:58 -- accel/accel.sh@21 -- # val= 00:07:21.668 19:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:21.668 19:07:58 -- 
accel/accel.sh@20 -- # read -r var val 00:07:21.668 19:07:58 -- accel/accel.sh@21 -- # val= 00:07:21.668 19:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:21.668 19:07:58 -- accel/accel.sh@21 -- # val= 00:07:21.668 19:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:21.668 19:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:21.668 19:07:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.668 19:07:58 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:21.668 19:07:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.668 00:07:21.668 real 0m4.568s 00:07:21.668 user 0m4.059s 00:07:21.668 sys 0m0.297s 00:07:21.668 19:07:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.668 19:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:21.668 ************************************ 00:07:21.668 END TEST accel_compare 00:07:21.668 ************************************ 00:07:21.668 19:07:58 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:21.668 19:07:58 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:07:21.668 19:07:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:21.668 19:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:21.668 ************************************ 00:07:21.668 START TEST accel_xor 00:07:21.668 ************************************ 00:07:21.668 19:07:58 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y 00:07:21.668 19:07:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.668 19:07:58 -- accel/accel.sh@17 -- # local accel_module 00:07:21.668 19:07:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:21.668 19:07:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:21.668 19:07:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.668 19:07:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.668 19:07:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.668 19:07:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.668 19:07:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.668 19:07:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.669 19:07:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.669 19:07:58 -- accel/accel.sh@42 -- # jq -r . 00:07:21.669 [2024-02-14 19:07:58.935200] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
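A note on the throughput figures reported by accel_perf: the MiB/s column appears to follow directly from the transfers/s column multiplied by the 4096-byte transfer size shown in each configuration block. A back-of-the-envelope check (an assumption based on those two columns, not part of the captured output) for the compare totals above:

    # hypothetical sanity check: 391200 transfers/s of 4096 bytes each
    awk 'BEGIN { printf "%.0f MiB/s\n", 391200 * 4096 / (1024 * 1024) }'   # prints 1528 MiB/s

The same relation matches the later Total rows as well, e.g. 210720/s for the two-buffer xor run works out to roughly 823 MiB/s.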
00:07:21.669 [2024-02-14 19:07:58.935353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60081 ] 00:07:21.927 [2024-02-14 19:07:59.091851] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.927 [2024-02-14 19:07:59.250923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.927 [2024-02-14 19:07:59.251023] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:23.304 [2024-02-14 19:08:00.401715] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:23.872 19:08:01 -- accel/accel.sh@18 -- # out=' 00:07:23.872 SPDK Configuration: 00:07:23.872 Core mask: 0x1 00:07:23.872 00:07:23.872 Accel Perf Configuration: 00:07:23.872 Workload Type: xor 00:07:23.872 Source buffers: 2 00:07:23.872 Transfer size: 4096 bytes 00:07:23.872 Vector count 1 00:07:23.872 Module: software 00:07:23.872 Queue depth: 32 00:07:23.872 Allocate depth: 32 00:07:23.872 # threads/core: 1 00:07:23.872 Run time: 1 seconds 00:07:23.872 Verify: Yes 00:07:23.872 00:07:23.872 Running for 1 seconds... 00:07:23.872 00:07:23.872 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.872 ------------------------------------------------------------------------------------ 00:07:23.872 0,0 210720/s 823 MiB/s 0 0 00:07:23.872 ==================================================================================== 00:07:23.872 Total 210720/s 823 MiB/s 0 0' 00:07:23.872 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:23.872 19:08:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:23.872 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:23.872 19:08:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:23.872 19:08:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.872 19:08:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.872 19:08:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.872 19:08:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.872 19:08:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.872 19:08:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.872 19:08:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.872 19:08:01 -- accel/accel.sh@42 -- # jq -r . 00:07:23.872 [2024-02-14 19:08:01.203541] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:23.872 [2024-02-14 19:08:01.204071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60107 ] 00:07:24.130 [2024-02-14 19:08:01.372390] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.130 [2024-02-14 19:08:01.532384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.130 [2024-02-14 19:08:01.532825] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:24.389 19:08:01 -- accel/accel.sh@21 -- # val= 00:07:24.389 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.389 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.389 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.389 19:08:01 -- accel/accel.sh@21 -- # val= 00:07:24.389 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.389 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.389 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.389 19:08:01 -- accel/accel.sh@21 -- # val=0x1 00:07:24.389 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.389 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.389 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.389 19:08:01 -- accel/accel.sh@21 -- # val= 00:07:24.389 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.389 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.389 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.389 19:08:01 -- accel/accel.sh@21 -- # val= 00:07:24.389 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.389 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.389 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.389 19:08:01 -- accel/accel.sh@21 -- # val=xor 00:07:24.389 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.389 19:08:01 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val=2 00:07:24.390 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.390 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val= 00:07:24.390 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val=software 00:07:24.390 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val=32 00:07:24.390 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val=32 00:07:24.390 19:08:01 
-- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val=1 00:07:24.390 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.390 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val=Yes 00:07:24.390 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val= 00:07:24.390 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:24.390 19:08:01 -- accel/accel.sh@21 -- # val= 00:07:24.390 19:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:24.390 19:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:25.325 [2024-02-14 19:08:02.700801] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:26.316 19:08:03 -- accel/accel.sh@21 -- # val= 00:07:26.316 19:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # IFS=: 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # read -r var val 00:07:26.316 19:08:03 -- accel/accel.sh@21 -- # val= 00:07:26.316 19:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # IFS=: 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # read -r var val 00:07:26.316 19:08:03 -- accel/accel.sh@21 -- # val= 00:07:26.316 19:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # IFS=: 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # read -r var val 00:07:26.316 19:08:03 -- accel/accel.sh@21 -- # val= 00:07:26.316 19:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # IFS=: 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # read -r var val 00:07:26.316 19:08:03 -- accel/accel.sh@21 -- # val= 00:07:26.316 19:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # IFS=: 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # read -r var val 00:07:26.316 19:08:03 -- accel/accel.sh@21 -- # val= 00:07:26.316 19:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # IFS=: 00:07:26.316 19:08:03 -- accel/accel.sh@20 -- # read -r var val 00:07:26.316 ************************************ 00:07:26.316 END TEST accel_xor 00:07:26.316 ************************************ 00:07:26.316 19:08:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.316 19:08:03 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:26.316 19:08:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.316 00:07:26.316 real 0m4.556s 00:07:26.316 user 0m4.053s 00:07:26.316 sys 0m0.290s 00:07:26.316 19:08:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.316 19:08:03 -- common/autotest_common.sh@10 -- # set 
+x 00:07:26.316 19:08:03 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:26.316 19:08:03 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:07:26.316 19:08:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:26.316 19:08:03 -- common/autotest_common.sh@10 -- # set +x 00:07:26.316 ************************************ 00:07:26.316 START TEST accel_xor 00:07:26.316 ************************************ 00:07:26.316 19:08:03 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y -x 3 00:07:26.316 19:08:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.316 19:08:03 -- accel/accel.sh@17 -- # local accel_module 00:07:26.316 19:08:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:26.316 19:08:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:26.316 19:08:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.316 19:08:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.316 19:08:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.316 19:08:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.316 19:08:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.316 19:08:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.316 19:08:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.316 19:08:03 -- accel/accel.sh@42 -- # jq -r . 00:07:26.316 [2024-02-14 19:08:03.550567] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:26.316 [2024-02-14 19:08:03.550753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60158 ] 00:07:26.316 [2024-02-14 19:08:03.715897] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.575 [2024-02-14 19:08:03.883056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.575 [2024-02-14 19:08:03.883160] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:27.948 [2024-02-14 19:08:05.043497] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:28.515 19:08:05 -- accel/accel.sh@18 -- # out=' 00:07:28.516 SPDK Configuration: 00:07:28.516 Core mask: 0x1 00:07:28.516 00:07:28.516 Accel Perf Configuration: 00:07:28.516 Workload Type: xor 00:07:28.516 Source buffers: 3 00:07:28.516 Transfer size: 4096 bytes 00:07:28.516 Vector count 1 00:07:28.516 Module: software 00:07:28.516 Queue depth: 32 00:07:28.516 Allocate depth: 32 00:07:28.516 # threads/core: 1 00:07:28.516 Run time: 1 seconds 00:07:28.516 Verify: Yes 00:07:28.516 00:07:28.516 Running for 1 seconds... 
00:07:28.516 00:07:28.516 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.516 ------------------------------------------------------------------------------------ 00:07:28.516 0,0 194176/s 758 MiB/s 0 0 00:07:28.516 ==================================================================================== 00:07:28.516 Total 194176/s 758 MiB/s 0 0' 00:07:28.516 19:08:05 -- accel/accel.sh@20 -- # IFS=: 00:07:28.516 19:08:05 -- accel/accel.sh@20 -- # read -r var val 00:07:28.516 19:08:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:28.516 19:08:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:28.516 19:08:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.516 19:08:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.516 19:08:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.516 19:08:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.516 19:08:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.516 19:08:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.516 19:08:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.516 19:08:05 -- accel/accel.sh@42 -- # jq -r . 00:07:28.516 [2024-02-14 19:08:05.837865] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:28.516 [2024-02-14 19:08:05.838100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60184 ] 00:07:28.775 [2024-02-14 19:08:06.028861] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.775 [2024-02-14 19:08:06.191359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.775 [2024-02-14 19:08:06.191466] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:29.034 19:08:06 -- accel/accel.sh@21 -- # val= 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val= 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val=0x1 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val= 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val= 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val=xor 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- 
accel/accel.sh@21 -- # val=3 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val= 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val=software 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val=32 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val=32 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val=1 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val=Yes 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val= 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.035 19:08:06 -- accel/accel.sh@21 -- # val= 00:07:29.035 19:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:29.035 19:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:29.972 [2024-02-14 19:08:07.368617] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:30.913 19:08:08 -- accel/accel.sh@21 -- # val= 00:07:30.913 19:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # IFS=: 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # read -r var val 00:07:30.913 19:08:08 -- accel/accel.sh@21 -- # val= 00:07:30.913 19:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # IFS=: 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # read -r var val 00:07:30.913 19:08:08 -- accel/accel.sh@21 -- # val= 00:07:30.913 19:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # IFS=: 00:07:30.913 19:08:08 -- 
accel/accel.sh@20 -- # read -r var val 00:07:30.913 19:08:08 -- accel/accel.sh@21 -- # val= 00:07:30.913 19:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # IFS=: 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # read -r var val 00:07:30.913 19:08:08 -- accel/accel.sh@21 -- # val= 00:07:30.913 19:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # IFS=: 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # read -r var val 00:07:30.913 19:08:08 -- accel/accel.sh@21 -- # val= 00:07:30.913 19:08:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # IFS=: 00:07:30.913 19:08:08 -- accel/accel.sh@20 -- # read -r var val 00:07:30.913 19:08:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.913 19:08:08 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:30.913 19:08:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.913 00:07:30.913 real 0m4.614s 00:07:30.913 user 0m4.085s 00:07:30.913 sys 0m0.317s 00:07:30.913 19:08:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.913 19:08:08 -- common/autotest_common.sh@10 -- # set +x 00:07:30.913 ************************************ 00:07:30.913 END TEST accel_xor 00:07:30.913 ************************************ 00:07:30.913 19:08:08 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:30.913 19:08:08 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:07:30.913 19:08:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:30.913 19:08:08 -- common/autotest_common.sh@10 -- # set +x 00:07:30.913 ************************************ 00:07:30.913 START TEST accel_dif_verify 00:07:30.913 ************************************ 00:07:30.913 19:08:08 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_verify 00:07:30.913 19:08:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.913 19:08:08 -- accel/accel.sh@17 -- # local accel_module 00:07:30.913 19:08:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:30.913 19:08:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:30.913 19:08:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.913 19:08:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.913 19:08:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.913 19:08:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.913 19:08:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.913 19:08:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.913 19:08:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.913 19:08:08 -- accel/accel.sh@42 -- # jq -r . 00:07:30.913 [2024-02-14 19:08:08.223649] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
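The build_accel_config trace that precedes each run (accel_json_cfg=(), the "0 -gt 0" guards, local IFS=',' and jq -r .) suggests the harness collects per-module JSON fragments into an array, joins them with commas, pretty-prints the result through jq, and hands it to accel_perf as the -c argument over a file descriptor. A minimal sketch of that idea, reusing the array name from the trace but with an assumed JSON envelope rather than the actual accel.sh code:

    # minimal sketch, assuming fragments are joined with ',' and fed over a process substitution;
    # the envelope below is illustrative, not the verbatim SPDK config schema
    accel_json_cfg=()                                   # module fragments would be appended here
    IFS=','
    cfg="{\"accel\":[${accel_json_cfg[*]}]}"
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(jq -r . <<< "$cfg") -t 1 -w dif_verify -y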
00:07:30.913 [2024-02-14 19:08:08.223819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60230 ] 00:07:31.173 [2024-02-14 19:08:08.394115] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.173 [2024-02-14 19:08:08.554500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.173 [2024-02-14 19:08:08.554678] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:32.552 [2024-02-14 19:08:09.715299] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:33.121 19:08:10 -- accel/accel.sh@18 -- # out=' 00:07:33.121 SPDK Configuration: 00:07:33.121 Core mask: 0x1 00:07:33.121 00:07:33.121 Accel Perf Configuration: 00:07:33.121 Workload Type: dif_verify 00:07:33.121 Vector size: 4096 bytes 00:07:33.121 Transfer size: 4096 bytes 00:07:33.121 Block size: 512 bytes 00:07:33.121 Metadata size: 8 bytes 00:07:33.121 Vector count 1 00:07:33.121 Module: software 00:07:33.121 Queue depth: 32 00:07:33.121 Allocate depth: 32 00:07:33.121 # threads/core: 1 00:07:33.121 Run time: 1 seconds 00:07:33.121 Verify: No 00:07:33.121 00:07:33.121 Running for 1 seconds... 00:07:33.121 00:07:33.122 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.122 ------------------------------------------------------------------------------------ 00:07:33.122 0,0 98720/s 391 MiB/s 0 0 00:07:33.122 ==================================================================================== 00:07:33.122 Total 98720/s 385 MiB/s 0 0' 00:07:33.122 19:08:10 -- accel/accel.sh@20 -- # IFS=: 00:07:33.122 19:08:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:33.122 19:08:10 -- accel/accel.sh@20 -- # read -r var val 00:07:33.122 19:08:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:33.122 19:08:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.122 19:08:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.122 19:08:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.122 19:08:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.122 19:08:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.122 19:08:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.122 19:08:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.122 19:08:10 -- accel/accel.sh@42 -- # jq -r . 00:07:33.122 [2024-02-14 19:08:10.503854] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:33.122 [2024-02-14 19:08:10.504042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60262 ] 00:07:33.382 [2024-02-14 19:08:10.676431] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.642 [2024-02-14 19:08:10.846517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.642 [2024-02-14 19:08:10.846695] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val= 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val= 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val=0x1 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val= 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val= 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val=dif_verify 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val= 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val=software 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val=32 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val=32 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val=1 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val=No 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val= 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:33.642 19:08:11 -- accel/accel.sh@21 -- # val= 00:07:33.642 19:08:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # IFS=: 00:07:33.642 19:08:11 -- accel/accel.sh@20 -- # read -r var val 00:07:35.044 [2024-02-14 19:08:12.032545] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:35.613 19:08:12 -- accel/accel.sh@21 -- # val= 00:07:35.613 19:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:35.613 19:08:12 -- accel/accel.sh@21 -- # val= 00:07:35.613 19:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:35.613 19:08:12 -- accel/accel.sh@21 -- # val= 00:07:35.613 19:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:35.613 19:08:12 -- accel/accel.sh@21 -- # val= 00:07:35.613 19:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:35.613 19:08:12 -- accel/accel.sh@21 -- # val= 00:07:35.613 19:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:35.613 19:08:12 -- accel/accel.sh@21 -- # val= 00:07:35.613 19:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:35.613 19:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:35.613 19:08:12 -- accel/accel.sh@28 -- # [[ -n 
software ]] 00:07:35.613 19:08:12 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:35.613 ************************************ 00:07:35.613 END TEST accel_dif_verify 00:07:35.613 ************************************ 00:07:35.613 19:08:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.613 00:07:35.613 real 0m4.782s 00:07:35.613 user 0m4.266s 00:07:35.613 sys 0m0.306s 00:07:35.613 19:08:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.613 19:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:35.613 19:08:12 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:35.613 19:08:12 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:07:35.613 19:08:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:35.613 19:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:35.613 ************************************ 00:07:35.613 START TEST accel_dif_generate 00:07:35.613 ************************************ 00:07:35.613 19:08:13 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate 00:07:35.613 19:08:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.613 19:08:13 -- accel/accel.sh@17 -- # local accel_module 00:07:35.613 19:08:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:35.613 19:08:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:35.613 19:08:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.613 19:08:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.613 19:08:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.613 19:08:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.613 19:08:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.613 19:08:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.613 19:08:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.613 19:08:13 -- accel/accel.sh@42 -- # jq -r . 00:07:35.872 [2024-02-14 19:08:13.058613] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
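The repeating IFS=: / read -r var val / case "$var" lines in the xtrace show how the script digests accel_perf's own configuration echo: the captured $out text is read line by line, each line is split on ':' and the fields of interest are recorded (the workload into accel_opc, the module into accel_module) for the assertions at accel.sh@28. A stripped-down sketch of that loop, not the verbatim accel.sh code:

    # sketch of the key/value parsing loop visible in the xtrace
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=${val// /} ;;     # e.g. dif_generate
            *'Module'*)        accel_module=${val// /} ;;  # e.g. software
        esac
    done <<< "$out"
    [[ -n $accel_module && -n $accel_opc ]]                # the checks at accel.sh@28 test these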
00:07:35.873 [2024-02-14 19:08:13.058805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60303 ] 00:07:35.873 [2024-02-14 19:08:13.226612] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.131 [2024-02-14 19:08:13.398005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.131 [2024-02-14 19:08:13.398116] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:37.508 [2024-02-14 19:08:14.579732] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:38.075 19:08:15 -- accel/accel.sh@18 -- # out=' 00:07:38.075 SPDK Configuration: 00:07:38.075 Core mask: 0x1 00:07:38.075 00:07:38.075 Accel Perf Configuration: 00:07:38.075 Workload Type: dif_generate 00:07:38.075 Vector size: 4096 bytes 00:07:38.075 Transfer size: 4096 bytes 00:07:38.075 Block size: 512 bytes 00:07:38.075 Metadata size: 8 bytes 00:07:38.075 Vector count 1 00:07:38.075 Module: software 00:07:38.075 Queue depth: 32 00:07:38.075 Allocate depth: 32 00:07:38.075 # threads/core: 1 00:07:38.075 Run time: 1 seconds 00:07:38.075 Verify: No 00:07:38.075 00:07:38.075 Running for 1 seconds... 00:07:38.075 00:07:38.076 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.076 ------------------------------------------------------------------------------------ 00:07:38.076 0,0 116000/s 460 MiB/s 0 0 00:07:38.076 ==================================================================================== 00:07:38.076 Total 116000/s 453 MiB/s 0 0' 00:07:38.076 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.076 19:08:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:38.076 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.076 19:08:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:38.076 19:08:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.076 19:08:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.076 19:08:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.076 19:08:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.076 19:08:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.076 19:08:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.076 19:08:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.076 19:08:15 -- accel/accel.sh@42 -- # jq -r . 00:07:38.076 [2024-02-14 19:08:15.382830] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:38.076 [2024-02-14 19:08:15.383328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60329 ] 00:07:38.335 [2024-02-14 19:08:15.557045] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.335 [2024-02-14 19:08:15.714372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.335 [2024-02-14 19:08:15.714496] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val= 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val= 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val=0x1 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val= 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val= 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val=dif_generate 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val= 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val=software 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val=32 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val=32 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.594 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.594 19:08:15 -- accel/accel.sh@21 -- # val=1 00:07:38.594 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.595 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.595 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.595 19:08:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.595 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.595 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.595 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.595 19:08:15 -- accel/accel.sh@21 -- # val=No 00:07:38.595 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.595 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.595 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.595 19:08:15 -- accel/accel.sh@21 -- # val= 00:07:38.595 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.595 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.595 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:38.595 19:08:15 -- accel/accel.sh@21 -- # val= 00:07:38.595 19:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.595 19:08:15 -- accel/accel.sh@20 -- # IFS=: 00:07:38.595 19:08:15 -- accel/accel.sh@20 -- # read -r var val 00:07:39.531 [2024-02-14 19:08:16.889008] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:40.469 19:08:17 -- accel/accel.sh@21 -- # val= 00:07:40.469 19:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:40.469 19:08:17 -- accel/accel.sh@21 -- # val= 00:07:40.469 19:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:40.469 19:08:17 -- accel/accel.sh@21 -- # val= 00:07:40.469 19:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:40.469 19:08:17 -- accel/accel.sh@21 -- # val= 00:07:40.469 19:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:40.469 19:08:17 -- accel/accel.sh@21 -- # val= 00:07:40.469 19:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:40.469 19:08:17 -- accel/accel.sh@21 -- # val= 00:07:40.469 19:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:40.469 19:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:40.469 19:08:17 -- accel/accel.sh@28 -- # [[ -n 
software ]] 00:07:40.469 19:08:17 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:40.469 19:08:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.469 00:07:40.469 real 0m4.633s 00:07:40.469 user 0m4.145s 00:07:40.469 sys 0m0.281s 00:07:40.469 19:08:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.469 ************************************ 00:07:40.469 END TEST accel_dif_generate 00:07:40.469 ************************************ 00:07:40.469 19:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.469 19:08:17 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:40.469 19:08:17 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:07:40.469 19:08:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:40.469 19:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.469 ************************************ 00:07:40.469 START TEST accel_dif_generate_copy 00:07:40.469 ************************************ 00:07:40.469 19:08:17 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate_copy 00:07:40.469 19:08:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.469 19:08:17 -- accel/accel.sh@17 -- # local accel_module 00:07:40.469 19:08:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:40.469 19:08:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:40.469 19:08:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.469 19:08:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.469 19:08:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.469 19:08:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.469 19:08:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.469 19:08:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.469 19:08:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.469 19:08:17 -- accel/accel.sh@42 -- # jq -r . 00:07:40.469 [2024-02-14 19:08:17.750386] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
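Each of these sub-tests is driven by the same run_test wrapper (the common/autotest_common.sh frames around @1102 above): it prints the START TEST banner, times the command with bash's time keyword (the real/user/sys triplets in the log), and prints the END TEST banner. A rough approximation for orientation only, since the real helper also manages xtrace and exit status:

    # rough approximation of the run_test pattern seen in the log, not the actual helper
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        echo "************ END TEST $name ************"
    }
    # usage as seen in the trace:
    #   run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy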
00:07:40.469 [2024-02-14 19:08:17.750601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60381 ] 00:07:40.729 [2024-02-14 19:08:17.922425] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.729 [2024-02-14 19:08:18.082776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.729 [2024-02-14 19:08:18.082859] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:42.121 [2024-02-14 19:08:19.248496] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:42.689 19:08:19 -- accel/accel.sh@18 -- # out=' 00:07:42.689 SPDK Configuration: 00:07:42.689 Core mask: 0x1 00:07:42.689 00:07:42.689 Accel Perf Configuration: 00:07:42.689 Workload Type: dif_generate_copy 00:07:42.689 Vector size: 4096 bytes 00:07:42.689 Transfer size: 4096 bytes 00:07:42.689 Vector count 1 00:07:42.689 Module: software 00:07:42.689 Queue depth: 32 00:07:42.689 Allocate depth: 32 00:07:42.689 # threads/core: 1 00:07:42.689 Run time: 1 seconds 00:07:42.689 Verify: No 00:07:42.689 00:07:42.689 Running for 1 seconds... 00:07:42.689 00:07:42.689 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.689 ------------------------------------------------------------------------------------ 00:07:42.689 0,0 88672/s 351 MiB/s 0 0 00:07:42.689 ==================================================================================== 00:07:42.689 Total 88672/s 346 MiB/s 0 0' 00:07:42.689 19:08:19 -- accel/accel.sh@20 -- # IFS=: 00:07:42.689 19:08:19 -- accel/accel.sh@20 -- # read -r var val 00:07:42.689 19:08:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:42.689 19:08:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:42.689 19:08:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.689 19:08:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.689 19:08:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.689 19:08:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.689 19:08:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.689 19:08:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.689 19:08:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.689 19:08:19 -- accel/accel.sh@42 -- # jq -r . 00:07:42.689 [2024-02-14 19:08:20.023055] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:42.689 [2024-02-14 19:08:20.023218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60407 ] 00:07:42.948 [2024-02-14 19:08:20.194265] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.948 [2024-02-14 19:08:20.350634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.948 [2024-02-14 19:08:20.350736] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:43.208 19:08:20 -- accel/accel.sh@21 -- # val= 00:07:43.208 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.208 19:08:20 -- accel/accel.sh@21 -- # val= 00:07:43.208 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.208 19:08:20 -- accel/accel.sh@21 -- # val=0x1 00:07:43.208 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.208 19:08:20 -- accel/accel.sh@21 -- # val= 00:07:43.208 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.208 19:08:20 -- accel/accel.sh@21 -- # val= 00:07:43.208 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.208 19:08:20 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:43.208 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.208 19:08:20 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.208 19:08:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.208 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.208 19:08:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.208 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.208 19:08:20 -- accel/accel.sh@21 -- # val= 00:07:43.208 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.208 19:08:20 -- accel/accel.sh@21 -- # val=software 00:07:43.208 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.208 19:08:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.208 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.209 19:08:20 -- accel/accel.sh@21 -- # val=32 00:07:43.209 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.209 19:08:20 -- 
accel/accel.sh@21 -- # val=32 00:07:43.209 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.209 19:08:20 -- accel/accel.sh@21 -- # val=1 00:07:43.209 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.209 19:08:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:43.209 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.209 19:08:20 -- accel/accel.sh@21 -- # val=No 00:07:43.209 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.209 19:08:20 -- accel/accel.sh@21 -- # val= 00:07:43.209 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:43.209 19:08:20 -- accel/accel.sh@21 -- # val= 00:07:43.209 19:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # IFS=: 00:07:43.209 19:08:20 -- accel/accel.sh@20 -- # read -r var val 00:07:44.199 [2024-02-14 19:08:21.514991] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:45.138 19:08:22 -- accel/accel.sh@21 -- # val= 00:07:45.138 19:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # IFS=: 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # read -r var val 00:07:45.138 19:08:22 -- accel/accel.sh@21 -- # val= 00:07:45.138 19:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # IFS=: 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # read -r var val 00:07:45.138 19:08:22 -- accel/accel.sh@21 -- # val= 00:07:45.138 19:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # IFS=: 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # read -r var val 00:07:45.138 19:08:22 -- accel/accel.sh@21 -- # val= 00:07:45.138 19:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # IFS=: 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # read -r var val 00:07:45.138 19:08:22 -- accel/accel.sh@21 -- # val= 00:07:45.138 19:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # IFS=: 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # read -r var val 00:07:45.138 19:08:22 -- accel/accel.sh@21 -- # val= 00:07:45.138 19:08:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # IFS=: 00:07:45.138 19:08:22 -- accel/accel.sh@20 -- # read -r var val 00:07:45.138 19:08:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.138 ************************************ 00:07:45.138 END TEST accel_dif_generate_copy 00:07:45.138 ************************************ 00:07:45.138 19:08:22 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:45.138 19:08:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.138 00:07:45.138 real 0m4.558s 00:07:45.138 user 0m4.024s 00:07:45.138 sys 0m0.325s 00:07:45.138 19:08:22 -- common/autotest_common.sh@1103 -- 
# xtrace_disable 00:07:45.138 19:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.138 19:08:22 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:45.138 19:08:22 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.138 19:08:22 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:07:45.138 19:08:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:45.138 19:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:45.139 ************************************ 00:07:45.139 START TEST accel_comp 00:07:45.139 ************************************ 00:07:45.139 19:08:22 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.139 19:08:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.139 19:08:22 -- accel/accel.sh@17 -- # local accel_module 00:07:45.139 19:08:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.139 19:08:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.139 19:08:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.139 19:08:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.139 19:08:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.139 19:08:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.139 19:08:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.139 19:08:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.139 19:08:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.139 19:08:22 -- accel/accel.sh@42 -- # jq -r . 00:07:45.139 [2024-02-14 19:08:22.366528] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:45.139 [2024-02-14 19:08:22.366704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:07:45.139 [2024-02-14 19:08:22.536930] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.398 [2024-02-14 19:08:22.704644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.398 [2024-02-14 19:08:22.704752] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:46.776 [2024-02-14 19:08:23.867965] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:47.345 19:08:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:47.345 00:07:47.345 SPDK Configuration: 00:07:47.345 Core mask: 0x1 00:07:47.345 00:07:47.345 Accel Perf Configuration: 00:07:47.345 Workload Type: compress 00:07:47.345 Transfer size: 4096 bytes 00:07:47.345 Vector count 1 00:07:47.345 Module: software 00:07:47.345 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.345 Queue depth: 32 00:07:47.345 Allocate depth: 32 00:07:47.345 # threads/core: 1 00:07:47.345 Run time: 1 seconds 00:07:47.345 Verify: No 00:07:47.345 00:07:47.345 Running for 1 seconds... 
00:07:47.345 00:07:47.345 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.345 ------------------------------------------------------------------------------------ 00:07:47.345 0,0 47648/s 198 MiB/s 0 0 00:07:47.345 ==================================================================================== 00:07:47.345 Total 47648/s 186 MiB/s 0 0' 00:07:47.345 19:08:24 -- accel/accel.sh@20 -- # IFS=: 00:07:47.345 19:08:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.345 19:08:24 -- accel/accel.sh@20 -- # read -r var val 00:07:47.345 19:08:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.345 19:08:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.345 19:08:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.345 19:08:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.345 19:08:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.345 19:08:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.345 19:08:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.345 19:08:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.345 19:08:24 -- accel/accel.sh@42 -- # jq -r . 00:07:47.345 [2024-02-14 19:08:24.648936] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:47.345 [2024-02-14 19:08:24.649101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60474 ] 00:07:47.605 [2024-02-14 19:08:24.814060] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.605 [2024-02-14 19:08:25.013992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.605 [2024-02-14 19:08:25.014094] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val= 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val= 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val= 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val=0x1 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val= 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val= 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 
19:08:25 -- accel/accel.sh@21 -- # val=compress 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val= 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val=software 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val=32 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val=32 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val=1 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val=No 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val= 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:47.864 19:08:25 -- accel/accel.sh@21 -- # val= 00:07:47.864 19:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # IFS=: 00:07:47.864 19:08:25 -- accel/accel.sh@20 -- # read -r var val 00:07:48.803 [2024-02-14 19:08:26.186198] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:49.741 19:08:26 -- accel/accel.sh@21 -- # val= 00:07:49.741 19:08:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # IFS=: 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # read -r var val 00:07:49.741 19:08:26 -- accel/accel.sh@21 -- # val= 00:07:49.741 
19:08:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # IFS=: 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # read -r var val 00:07:49.741 19:08:26 -- accel/accel.sh@21 -- # val= 00:07:49.741 19:08:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # IFS=: 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # read -r var val 00:07:49.741 19:08:26 -- accel/accel.sh@21 -- # val= 00:07:49.741 19:08:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # IFS=: 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # read -r var val 00:07:49.741 19:08:26 -- accel/accel.sh@21 -- # val= 00:07:49.741 19:08:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # IFS=: 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # read -r var val 00:07:49.741 19:08:26 -- accel/accel.sh@21 -- # val= 00:07:49.741 19:08:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # IFS=: 00:07:49.741 19:08:26 -- accel/accel.sh@20 -- # read -r var val 00:07:49.741 19:08:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:49.741 ************************************ 00:07:49.741 END TEST accel_comp 00:07:49.741 ************************************ 00:07:49.741 19:08:26 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:49.741 19:08:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.741 00:07:49.741 real 0m4.611s 00:07:49.741 user 0m4.112s 00:07:49.741 sys 0m0.289s 00:07:49.741 19:08:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.741 19:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:49.741 19:08:26 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:49.741 19:08:26 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:07:49.741 19:08:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:49.741 19:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:49.741 ************************************ 00:07:49.741 START TEST accel_decomp 00:07:49.741 ************************************ 00:07:49.741 19:08:26 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:49.741 19:08:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.741 19:08:26 -- accel/accel.sh@17 -- # local accel_module 00:07:49.741 19:08:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:49.741 19:08:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:49.741 19:08:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.741 19:08:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.741 19:08:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.741 19:08:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.741 19:08:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.741 19:08:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.741 19:08:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.741 19:08:26 -- accel/accel.sh@42 -- # jq -r . 00:07:49.741 [2024-02-14 19:08:27.033729] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:49.741 [2024-02-14 19:08:27.033957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60521 ] 00:07:50.001 [2024-02-14 19:08:27.200594] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.002 [2024-02-14 19:08:27.369059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.002 [2024-02-14 19:08:27.369183] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:51.428 [2024-02-14 19:08:28.547122] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:51.997 19:08:29 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:51.997 00:07:51.997 SPDK Configuration: 00:07:51.997 Core mask: 0x1 00:07:51.997 00:07:51.997 Accel Perf Configuration: 00:07:51.997 Workload Type: decompress 00:07:51.997 Transfer size: 4096 bytes 00:07:51.997 Vector count 1 00:07:51.997 Module: software 00:07:51.997 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.997 Queue depth: 32 00:07:51.997 Allocate depth: 32 00:07:51.997 # threads/core: 1 00:07:51.997 Run time: 1 seconds 00:07:51.997 Verify: Yes 00:07:51.997 00:07:51.997 Running for 1 seconds... 00:07:51.997 00:07:51.997 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:51.997 ------------------------------------------------------------------------------------ 00:07:51.997 0,0 65536/s 120 MiB/s 0 0 00:07:51.997 ==================================================================================== 00:07:51.997 Total 65536/s 256 MiB/s 0 0' 00:07:51.997 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:51.997 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:51.997 19:08:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.997 19:08:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.997 19:08:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.997 19:08:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.997 19:08:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.997 19:08:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.997 19:08:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.997 19:08:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.997 19:08:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.997 19:08:29 -- accel/accel.sh@42 -- # jq -r . 00:07:51.997 [2024-02-14 19:08:29.319269] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:51.997 [2024-02-14 19:08:29.319430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60547 ] 00:07:52.256 [2024-02-14 19:08:29.495733] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.567 [2024-02-14 19:08:29.721949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.567 [2024-02-14 19:08:29.722026] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val= 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val= 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val= 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val=0x1 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val= 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val= 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val=decompress 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val= 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val=software 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 
-- accel/accel.sh@21 -- # val=32 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val=32 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val=1 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val=Yes 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val= 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:52.567 19:08:29 -- accel/accel.sh@21 -- # val= 00:07:52.567 19:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # IFS=: 00:07:52.567 19:08:29 -- accel/accel.sh@20 -- # read -r var val 00:07:53.505 [2024-02-14 19:08:30.891265] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:54.443 19:08:31 -- accel/accel.sh@21 -- # val= 00:07:54.443 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:07:54.443 19:08:31 -- accel/accel.sh@21 -- # val= 00:07:54.443 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:07:54.443 19:08:31 -- accel/accel.sh@21 -- # val= 00:07:54.443 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:07:54.443 19:08:31 -- accel/accel.sh@21 -- # val= 00:07:54.443 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:07:54.443 19:08:31 -- accel/accel.sh@21 -- # val= 00:07:54.443 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:07:54.443 19:08:31 -- accel/accel.sh@21 -- # val= 00:07:54.443 19:08:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # IFS=: 00:07:54.443 19:08:31 -- accel/accel.sh@20 -- # read -r var val 00:07:54.443 19:08:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:54.443 19:08:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:54.443 19:08:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.443 00:07:54.443 real 0m4.655s 00:07:54.443 user 
0m4.145s 00:07:54.443 sys 0m0.304s 00:07:54.443 19:08:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.443 ************************************ 00:07:54.443 END TEST accel_decomp 00:07:54.444 ************************************ 00:07:54.444 19:08:31 -- common/autotest_common.sh@10 -- # set +x 00:07:54.444 19:08:31 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:54.444 19:08:31 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:07:54.444 19:08:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:54.444 19:08:31 -- common/autotest_common.sh@10 -- # set +x 00:07:54.444 ************************************ 00:07:54.444 START TEST accel_decmop_full 00:07:54.444 ************************************ 00:07:54.444 19:08:31 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:54.444 19:08:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.444 19:08:31 -- accel/accel.sh@17 -- # local accel_module 00:07:54.444 19:08:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:54.444 19:08:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:54.444 19:08:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.444 19:08:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.444 19:08:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.444 19:08:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.444 19:08:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.444 19:08:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.444 19:08:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.444 19:08:31 -- accel/accel.sh@42 -- # jq -r . 00:07:54.444 [2024-02-14 19:08:31.740984] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:54.444 [2024-02-14 19:08:31.741160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60593 ] 00:07:54.703 [2024-02-14 19:08:31.911284] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.703 [2024-02-14 19:08:32.070614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.703 [2024-02-14 19:08:32.070715] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:56.082 [2024-02-14 19:08:33.247280] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:56.649 19:08:34 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:56.649 00:07:56.649 SPDK Configuration: 00:07:56.649 Core mask: 0x1 00:07:56.649 00:07:56.649 Accel Perf Configuration: 00:07:56.649 Workload Type: decompress 00:07:56.649 Transfer size: 111250 bytes 00:07:56.649 Vector count 1 00:07:56.649 Module: software 00:07:56.649 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:56.649 Queue depth: 32 00:07:56.649 Allocate depth: 32 00:07:56.649 # threads/core: 1 00:07:56.649 Run time: 1 seconds 00:07:56.649 Verify: Yes 00:07:56.649 00:07:56.649 Running for 1 seconds... 00:07:56.649 00:07:56.649 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:56.649 ------------------------------------------------------------------------------------ 00:07:56.649 0,0 4640/s 191 MiB/s 0 0 00:07:56.649 ==================================================================================== 00:07:56.649 Total 4640/s 492 MiB/s 0 0' 00:07:56.649 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:56.649 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:56.649 19:08:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.649 19:08:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.649 19:08:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.649 19:08:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.649 19:08:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.649 19:08:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.649 19:08:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.649 19:08:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.649 19:08:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.649 19:08:34 -- accel/accel.sh@42 -- # jq -r . 00:07:56.908 [2024-02-14 19:08:34.091224] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:56.908 [2024-02-14 19:08:34.091390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60625 ] 00:07:56.908 [2024-02-14 19:08:34.260723] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.167 [2024-02-14 19:08:34.437603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.167 [2024-02-14 19:08:34.437723] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val= 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val= 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val= 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val=0x1 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val= 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val= 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val=decompress 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val= 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val=software 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 
-- accel/accel.sh@21 -- # val=32 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val=32 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val=1 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val=Yes 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val= 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:57.426 19:08:34 -- accel/accel.sh@21 -- # val= 00:07:57.426 19:08:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # IFS=: 00:07:57.426 19:08:34 -- accel/accel.sh@20 -- # read -r var val 00:07:58.361 [2024-02-14 19:08:35.623874] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:59.298 19:08:36 -- accel/accel.sh@21 -- # val= 00:07:59.298 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:07:59.298 19:08:36 -- accel/accel.sh@21 -- # val= 00:07:59.298 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:07:59.298 19:08:36 -- accel/accel.sh@21 -- # val= 00:07:59.298 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:07:59.298 19:08:36 -- accel/accel.sh@21 -- # val= 00:07:59.298 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:07:59.298 19:08:36 -- accel/accel.sh@21 -- # val= 00:07:59.298 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:07:59.298 19:08:36 -- accel/accel.sh@21 -- # val= 00:07:59.298 19:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # IFS=: 00:07:59.298 19:08:36 -- accel/accel.sh@20 -- # read -r var val 00:07:59.298 19:08:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.298 19:08:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:59.298 19:08:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.298 00:07:59.298 real 0m4.675s 00:07:59.298 user 
0m4.186s 00:07:59.298 sys 0m0.279s 00:07:59.298 ************************************ 00:07:59.298 END TEST accel_decmop_full 00:07:59.298 ************************************ 00:07:59.298 19:08:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.298 19:08:36 -- common/autotest_common.sh@10 -- # set +x 00:07:59.298 19:08:36 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.298 19:08:36 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:07:59.298 19:08:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:59.298 19:08:36 -- common/autotest_common.sh@10 -- # set +x 00:07:59.298 ************************************ 00:07:59.298 START TEST accel_decomp_mcore 00:07:59.298 ************************************ 00:07:59.298 19:08:36 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.298 19:08:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.298 19:08:36 -- accel/accel.sh@17 -- # local accel_module 00:07:59.298 19:08:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.298 19:08:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.298 19:08:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.298 19:08:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.299 19:08:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.299 19:08:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.299 19:08:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.299 19:08:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.299 19:08:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.299 19:08:36 -- accel/accel.sh@42 -- # jq -r . 00:07:59.299 [2024-02-14 19:08:36.469735] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:59.299 [2024-02-14 19:08:36.469920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60666 ] 00:07:59.299 [2024-02-14 19:08:36.643655] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.557 [2024-02-14 19:08:36.839059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.557 [2024-02-14 19:08:36.839162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.557 [2024-02-14 19:08:36.839298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.557 [2024-02-14 19:08:36.839311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.557 [2024-02-14 19:08:36.839918] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:00.935 [2024-02-14 19:08:38.041302] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:01.510 19:08:38 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:01.510 00:08:01.510 SPDK Configuration: 00:08:01.510 Core mask: 0xf 00:08:01.510 00:08:01.510 Accel Perf Configuration: 00:08:01.510 Workload Type: decompress 00:08:01.510 Transfer size: 4096 bytes 00:08:01.510 Vector count 1 00:08:01.510 Module: software 00:08:01.510 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:01.510 Queue depth: 32 00:08:01.510 Allocate depth: 32 00:08:01.510 # threads/core: 1 00:08:01.510 Run time: 1 seconds 00:08:01.510 Verify: Yes 00:08:01.510 00:08:01.510 Running for 1 seconds... 00:08:01.510 00:08:01.510 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:01.510 ------------------------------------------------------------------------------------ 00:08:01.510 0,0 53888/s 99 MiB/s 0 0 00:08:01.510 3,0 53504/s 98 MiB/s 0 0 00:08:01.510 2,0 50400/s 92 MiB/s 0 0 00:08:01.510 1,0 53920/s 99 MiB/s 0 0 00:08:01.510 ==================================================================================== 00:08:01.510 Total 211712/s 827 MiB/s 0 0' 00:08:01.510 19:08:38 -- accel/accel.sh@20 -- # IFS=: 00:08:01.510 19:08:38 -- accel/accel.sh@20 -- # read -r var val 00:08:01.510 19:08:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:01.510 19:08:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:01.510 19:08:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.510 19:08:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.510 19:08:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.510 19:08:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.510 19:08:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.510 19:08:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.510 19:08:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.510 19:08:38 -- accel/accel.sh@42 -- # jq -r . 00:08:01.510 [2024-02-14 19:08:38.914971] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:01.510 [2024-02-14 19:08:38.915135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60695 ] 00:08:01.768 [2024-02-14 19:08:39.098405] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.027 [2024-02-14 19:08:39.286828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.027 [2024-02-14 19:08:39.286965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.028 [2024-02-14 19:08:39.287062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.028 [2024-02-14 19:08:39.287311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.028 [2024-02-14 19:08:39.287391] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:02.286 19:08:39 -- accel/accel.sh@21 -- # val= 00:08:02.286 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.286 19:08:39 -- accel/accel.sh@21 -- # val= 00:08:02.286 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.286 19:08:39 -- accel/accel.sh@21 -- # val= 00:08:02.286 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.286 19:08:39 -- accel/accel.sh@21 -- # val=0xf 00:08:02.286 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.286 19:08:39 -- accel/accel.sh@21 -- # val= 00:08:02.286 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.286 19:08:39 -- accel/accel.sh@21 -- # val= 00:08:02.286 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.286 19:08:39 -- accel/accel.sh@21 -- # val=decompress 00:08:02.286 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.286 19:08:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.286 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val= 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val=software 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@23 -- # accel_module=software 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- 
# read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val=32 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val=32 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val=1 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val=Yes 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val= 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:02.287 19:08:39 -- accel/accel.sh@21 -- # val= 00:08:02.287 19:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # IFS=: 00:08:02.287 19:08:39 -- accel/accel.sh@20 -- # read -r var val 00:08:03.223 [2024-02-14 19:08:40.479118] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:04.159 19:08:41 -- accel/accel.sh@21 -- # val= 00:08:04.159 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:08:04.159 19:08:41 -- accel/accel.sh@21 -- # val= 00:08:04.159 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:08:04.159 19:08:41 -- accel/accel.sh@21 -- # val= 00:08:04.159 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:08:04.159 19:08:41 -- accel/accel.sh@21 -- # val= 00:08:04.159 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:08:04.159 19:08:41 -- accel/accel.sh@21 -- # val= 00:08:04.159 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:08:04.159 19:08:41 -- accel/accel.sh@21 -- # val= 00:08:04.159 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:08:04.159 19:08:41 -- 
accel/accel.sh@20 -- # read -r var val 00:08:04.159 19:08:41 -- accel/accel.sh@21 -- # val= 00:08:04.159 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:08:04.159 19:08:41 -- accel/accel.sh@21 -- # val= 00:08:04.159 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:08:04.159 19:08:41 -- accel/accel.sh@21 -- # val= 00:08:04.159 19:08:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # IFS=: 00:08:04.159 19:08:41 -- accel/accel.sh@20 -- # read -r var val 00:08:04.159 19:08:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:04.159 19:08:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:04.159 19:08:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.159 00:08:04.159 real 0m4.888s 00:08:04.159 user 0m14.196s 00:08:04.159 sys 0m0.353s 00:08:04.159 19:08:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.159 19:08:41 -- common/autotest_common.sh@10 -- # set +x 00:08:04.159 ************************************ 00:08:04.159 END TEST accel_decomp_mcore 00:08:04.159 ************************************ 00:08:04.159 19:08:41 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:04.159 19:08:41 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:08:04.159 19:08:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:04.159 19:08:41 -- common/autotest_common.sh@10 -- # set +x 00:08:04.159 ************************************ 00:08:04.159 START TEST accel_decomp_full_mcore 00:08:04.159 ************************************ 00:08:04.159 19:08:41 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:04.159 19:08:41 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.159 19:08:41 -- accel/accel.sh@17 -- # local accel_module 00:08:04.159 19:08:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:04.159 19:08:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:04.159 19:08:41 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.159 19:08:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.159 19:08:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.159 19:08:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.159 19:08:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.159 19:08:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.159 19:08:41 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.159 19:08:41 -- accel/accel.sh@42 -- # jq -r . 00:08:04.159 [2024-02-14 19:08:41.391622] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:04.160 [2024-02-14 19:08:41.391764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60750 ] 00:08:04.160 [2024-02-14 19:08:41.554621] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.418 [2024-02-14 19:08:41.739960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.418 [2024-02-14 19:08:41.740076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.418 [2024-02-14 19:08:41.740226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.418 [2024-02-14 19:08:41.740460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.418 [2024-02-14 19:08:41.740557] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:05.794 [2024-02-14 19:08:42.972253] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:06.730 19:08:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:06.730 00:08:06.730 SPDK Configuration: 00:08:06.730 Core mask: 0xf 00:08:06.730 00:08:06.730 Accel Perf Configuration: 00:08:06.730 Workload Type: decompress 00:08:06.730 Transfer size: 111250 bytes 00:08:06.730 Vector count 1 00:08:06.730 Module: software 00:08:06.730 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:06.730 Queue depth: 32 00:08:06.730 Allocate depth: 32 00:08:06.730 # threads/core: 1 00:08:06.730 Run time: 1 seconds 00:08:06.730 Verify: Yes 00:08:06.730 00:08:06.730 Running for 1 seconds... 00:08:06.730 00:08:06.730 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:06.730 ------------------------------------------------------------------------------------ 00:08:06.730 0,0 4192/s 173 MiB/s 0 0 00:08:06.730 3,0 4160/s 171 MiB/s 0 0 00:08:06.730 2,0 4224/s 174 MiB/s 0 0 00:08:06.730 1,0 4096/s 169 MiB/s 0 0 00:08:06.730 ==================================================================================== 00:08:06.730 Total 16672/s 1768 MiB/s 0 0' 00:08:06.730 19:08:43 -- accel/accel.sh@20 -- # IFS=: 00:08:06.730 19:08:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.730 19:08:43 -- accel/accel.sh@20 -- # read -r var val 00:08:06.730 19:08:43 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.730 19:08:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.730 19:08:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.730 19:08:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.730 19:08:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.730 19:08:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.730 19:08:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.730 19:08:43 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.730 19:08:43 -- accel/accel.sh@42 -- # jq -r . 00:08:06.730 [2024-02-14 19:08:43.846939] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:06.730 [2024-02-14 19:08:43.847388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60779 ] 00:08:06.730 [2024-02-14 19:08:44.016729] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.989 [2024-02-14 19:08:44.196291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.989 [2024-02-14 19:08:44.196390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.989 [2024-02-14 19:08:44.196564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.989 [2024-02-14 19:08:44.196760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.989 [2024-02-14 19:08:44.196818] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val= 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val= 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val= 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val=0xf 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val= 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val= 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val=decompress 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val= 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val=software 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@23 -- # accel_module=software 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 
-- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val=32 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val=32 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val=1 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val=Yes 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val= 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:06.989 19:08:44 -- accel/accel.sh@21 -- # val= 00:08:06.989 19:08:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # IFS=: 00:08:06.989 19:08:44 -- accel/accel.sh@20 -- # read -r var val 00:08:08.363 [2024-02-14 19:08:45.426075] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:08.929 19:08:46 -- accel/accel.sh@21 -- # val= 00:08:08.929 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:08:08.929 19:08:46 -- accel/accel.sh@21 -- # val= 00:08:08.929 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:08:08.929 19:08:46 -- accel/accel.sh@21 -- # val= 00:08:08.929 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:08:08.929 19:08:46 -- accel/accel.sh@21 -- # val= 00:08:08.929 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:08:08.929 19:08:46 -- accel/accel.sh@21 -- # val= 00:08:08.929 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:08:08.929 19:08:46 -- accel/accel.sh@21 -- # val= 00:08:08.929 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:08:08.929 19:08:46 -- 
accel/accel.sh@20 -- # read -r var val 00:08:08.929 19:08:46 -- accel/accel.sh@21 -- # val= 00:08:08.929 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:08:08.929 19:08:46 -- accel/accel.sh@21 -- # val= 00:08:08.929 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:08:08.929 19:08:46 -- accel/accel.sh@21 -- # val= 00:08:08.929 19:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # IFS=: 00:08:08.929 19:08:46 -- accel/accel.sh@20 -- # read -r var val 00:08:08.929 19:08:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:08.929 19:08:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:08.929 19:08:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.929 00:08:08.929 real 0m4.910s 00:08:08.929 user 0m14.469s 00:08:08.929 sys 0m0.316s 00:08:08.929 ************************************ 00:08:08.929 END TEST accel_decomp_full_mcore 00:08:08.929 ************************************ 00:08:08.929 19:08:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.929 19:08:46 -- common/autotest_common.sh@10 -- # set +x 00:08:08.929 19:08:46 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.929 19:08:46 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:08:08.929 19:08:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:08.929 19:08:46 -- common/autotest_common.sh@10 -- # set +x 00:08:08.929 ************************************ 00:08:08.929 START TEST accel_decomp_mthread 00:08:08.929 ************************************ 00:08:08.929 19:08:46 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.929 19:08:46 -- accel/accel.sh@16 -- # local accel_opc 00:08:08.929 19:08:46 -- accel/accel.sh@17 -- # local accel_module 00:08:08.929 19:08:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.929 19:08:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.929 19:08:46 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.929 19:08:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.929 19:08:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.929 19:08:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.929 19:08:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.929 19:08:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.929 19:08:46 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.929 19:08:46 -- accel/accel.sh@42 -- # jq -r . 00:08:09.187 [2024-02-14 19:08:46.350755] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:09.187 [2024-02-14 19:08:46.350892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60829 ] 00:08:09.187 [2024-02-14 19:08:46.510148] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.445 [2024-02-14 19:08:46.693169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.445 [2024-02-14 19:08:46.693288] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:10.844 [2024-02-14 19:08:47.875183] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:11.410 19:08:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:11.410 00:08:11.410 SPDK Configuration: 00:08:11.410 Core mask: 0x1 00:08:11.410 00:08:11.410 Accel Perf Configuration: 00:08:11.410 Workload Type: decompress 00:08:11.410 Transfer size: 4096 bytes 00:08:11.410 Vector count 1 00:08:11.410 Module: software 00:08:11.410 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.410 Queue depth: 32 00:08:11.410 Allocate depth: 32 00:08:11.410 # threads/core: 2 00:08:11.410 Run time: 1 seconds 00:08:11.410 Verify: Yes 00:08:11.410 00:08:11.410 Running for 1 seconds... 00:08:11.410 00:08:11.410 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:11.410 ------------------------------------------------------------------------------------ 00:08:11.410 0,1 29568/s 54 MiB/s 0 0 00:08:11.410 0,0 29472/s 54 MiB/s 0 0 00:08:11.410 ==================================================================================== 00:08:11.410 Total 59040/s 230 MiB/s 0 0' 00:08:11.410 19:08:48 -- accel/accel.sh@20 -- # IFS=: 00:08:11.410 19:08:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:11.410 19:08:48 -- accel/accel.sh@20 -- # read -r var val 00:08:11.410 19:08:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:11.410 19:08:48 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.410 19:08:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.410 19:08:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.410 19:08:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.410 19:08:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.410 19:08:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.410 19:08:48 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.410 19:08:48 -- accel/accel.sh@42 -- # jq -r . 00:08:11.410 [2024-02-14 19:08:48.736631] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:11.410 [2024-02-14 19:08:48.736808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60860 ] 00:08:11.668 [2024-02-14 19:08:48.907981] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.927 [2024-02-14 19:08:49.092269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.927 [2024-02-14 19:08:49.092369] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val= 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val= 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val= 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val=0x1 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val= 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val= 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val=decompress 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val= 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val=software 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@23 -- # accel_module=software 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 
-- accel/accel.sh@21 -- # val=32 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val=32 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val=2 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val=Yes 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val= 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:11.927 19:08:49 -- accel/accel.sh@21 -- # val= 00:08:11.927 19:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # IFS=: 00:08:11.927 19:08:49 -- accel/accel.sh@20 -- # read -r var val 00:08:12.861 [2024-02-14 19:08:50.276312] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:13.796 19:08:51 -- accel/accel.sh@21 -- # val= 00:08:13.796 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:08:13.796 19:08:51 -- accel/accel.sh@21 -- # val= 00:08:13.796 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:08:13.796 19:08:51 -- accel/accel.sh@21 -- # val= 00:08:13.796 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:08:13.796 19:08:51 -- accel/accel.sh@21 -- # val= 00:08:13.796 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:08:13.796 19:08:51 -- accel/accel.sh@21 -- # val= 00:08:13.796 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:08:13.796 19:08:51 -- accel/accel.sh@21 -- # val= 00:08:13.796 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:08:13.796 19:08:51 -- accel/accel.sh@21 -- # val= 00:08:13.796 19:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # IFS=: 00:08:13.796 19:08:51 -- accel/accel.sh@20 -- # read -r var val 00:08:13.796 19:08:51 -- accel/accel.sh@28 -- # 
[[ -n software ]] 00:08:13.796 19:08:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:13.796 19:08:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.796 00:08:13.796 real 0m4.766s 00:08:13.796 user 0m4.261s 00:08:13.796 sys 0m0.294s 00:08:13.796 19:08:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.796 ************************************ 00:08:13.796 END TEST accel_decomp_mthread 00:08:13.796 ************************************ 00:08:13.796 19:08:51 -- common/autotest_common.sh@10 -- # set +x 00:08:13.796 19:08:51 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.796 19:08:51 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:08:13.796 19:08:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:13.796 19:08:51 -- common/autotest_common.sh@10 -- # set +x 00:08:13.796 ************************************ 00:08:13.796 START TEST accel_deomp_full_mthread 00:08:13.796 ************************************ 00:08:13.796 19:08:51 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.796 19:08:51 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.796 19:08:51 -- accel/accel.sh@17 -- # local accel_module 00:08:13.796 19:08:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.796 19:08:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.796 19:08:51 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.796 19:08:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.796 19:08:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.796 19:08:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.796 19:08:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.796 19:08:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.796 19:08:51 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.796 19:08:51 -- accel/accel.sh@42 -- # jq -r . 00:08:13.796 [2024-02-14 19:08:51.177500] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:08:13.796 [2024-02-14 19:08:51.178320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60907 ] 00:08:14.054 [2024-02-14 19:08:51.349577] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.312 [2024-02-14 19:08:51.523547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.312 [2024-02-14 19:08:51.523630] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:15.686 [2024-02-14 19:08:52.735730] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:16.252 19:08:53 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:16.252 00:08:16.252 SPDK Configuration: 00:08:16.252 Core mask: 0x1 00:08:16.252 00:08:16.252 Accel Perf Configuration: 00:08:16.252 Workload Type: decompress 00:08:16.252 Transfer size: 111250 bytes 00:08:16.252 Vector count 1 00:08:16.252 Module: software 00:08:16.252 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:16.252 Queue depth: 32 00:08:16.252 Allocate depth: 32 00:08:16.252 # threads/core: 2 00:08:16.252 Run time: 1 seconds 00:08:16.252 Verify: Yes 00:08:16.252 00:08:16.252 Running for 1 seconds... 00:08:16.252 00:08:16.252 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:16.252 ------------------------------------------------------------------------------------ 00:08:16.252 0,1 2208/s 91 MiB/s 0 0 00:08:16.252 0,0 2208/s 91 MiB/s 0 0 00:08:16.252 ==================================================================================== 00:08:16.252 Total 4416/s 468 MiB/s 0 0' 00:08:16.252 19:08:53 -- accel/accel.sh@20 -- # IFS=: 00:08:16.252 19:08:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:16.252 19:08:53 -- accel/accel.sh@20 -- # read -r var val 00:08:16.252 19:08:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:16.252 19:08:53 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.252 19:08:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:16.252 19:08:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.252 19:08:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.253 19:08:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:16.253 19:08:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:16.253 19:08:53 -- accel/accel.sh@41 -- # local IFS=, 00:08:16.253 19:08:53 -- accel/accel.sh@42 -- # jq -r . 00:08:16.253 [2024-02-14 19:08:53.595169] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:16.253 [2024-02-14 19:08:53.595343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60933 ] 00:08:16.511 [2024-02-14 19:08:53.764877] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.769 [2024-02-14 19:08:53.945947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.769 [2024-02-14 19:08:53.946046] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val= 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val= 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val= 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val=0x1 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val= 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val= 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val=decompress 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val= 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val=software 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@23 -- # accel_module=software 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 
-- accel/accel.sh@21 -- # val=32 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val=32 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val=2 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val=Yes 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val= 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:16.769 19:08:54 -- accel/accel.sh@21 -- # val= 00:08:16.769 19:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # IFS=: 00:08:16.769 19:08:54 -- accel/accel.sh@20 -- # read -r var val 00:08:18.143 [2024-02-14 19:08:55.170057] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:18.709 19:08:55 -- accel/accel.sh@21 -- # val= 00:08:18.709 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:08:18.709 19:08:55 -- accel/accel.sh@21 -- # val= 00:08:18.709 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:08:18.709 19:08:55 -- accel/accel.sh@21 -- # val= 00:08:18.709 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:08:18.709 19:08:55 -- accel/accel.sh@21 -- # val= 00:08:18.709 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:08:18.709 19:08:55 -- accel/accel.sh@21 -- # val= 00:08:18.709 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:08:18.709 19:08:55 -- accel/accel.sh@21 -- # val= 00:08:18.709 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:08:18.709 19:08:55 -- accel/accel.sh@21 -- # val= 00:08:18.709 19:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # IFS=: 00:08:18.709 19:08:55 -- accel/accel.sh@20 -- # read -r var val 00:08:18.709 19:08:55 -- accel/accel.sh@28 -- # 
[[ -n software ]] 00:08:18.709 19:08:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:18.709 19:08:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.709 00:08:18.709 real 0m4.830s 00:08:18.709 user 0m4.314s 00:08:18.709 sys 0m0.304s 00:08:18.709 19:08:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.709 ************************************ 00:08:18.709 END TEST accel_deomp_full_mthread 00:08:18.709 ************************************ 00:08:18.709 19:08:55 -- common/autotest_common.sh@10 -- # set +x 00:08:18.709 19:08:55 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:18.709 19:08:55 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:18.709 19:08:55 -- accel/accel.sh@129 -- # build_accel_config 00:08:18.709 19:08:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:18.709 19:08:55 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:08:18.709 19:08:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.709 19:08:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:18.709 19:08:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.709 19:08:55 -- common/autotest_common.sh@10 -- # set +x 00:08:18.709 19:08:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:18.709 19:08:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:18.709 19:08:55 -- accel/accel.sh@41 -- # local IFS=, 00:08:18.709 19:08:55 -- accel/accel.sh@42 -- # jq -r . 00:08:18.709 ************************************ 00:08:18.709 START TEST accel_dif_functional_tests 00:08:18.709 ************************************ 00:08:18.709 19:08:56 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:18.709 [2024-02-14 19:08:56.098516] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:18.709 [2024-02-14 19:08:56.098880] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60975 ] 00:08:18.967 [2024-02-14 19:08:56.267152] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.225 [2024-02-14 19:08:56.443834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.225 [2024-02-14 19:08:56.443954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.225 [2024-02-14 19:08:56.444327] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:19.225 [2024-02-14 19:08:56.443964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.483 00:08:19.483 00:08:19.483 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.483 http://cunit.sourceforge.net/ 00:08:19.483 00:08:19.483 00:08:19.483 Suite: accel_dif 00:08:19.483 Test: verify: DIF generated, GUARD check ...passed 00:08:19.483 Test: verify: DIF generated, APPTAG check ...passed 00:08:19.483 Test: verify: DIF generated, REFTAG check ...passed 00:08:19.483 Test: verify: DIF not generated, GUARD check ...[2024-02-14 19:08:56.716314] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:19.483 passed 00:08:19.483 Test: verify: DIF not generated, APPTAG check ...[2024-02-14 19:08:56.716396] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:19.483 [2024-02-14 19:08:56.716462] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:19.483 passed 00:08:19.483 Test: verify: DIF not generated, REFTAG check ...[2024-02-14 19:08:56.716670] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:19.483 passed 00:08:19.483 Test: verify: APPTAG correct, APPTAG check ...[2024-02-14 19:08:56.716736] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:19.483 [2024-02-14 19:08:56.716780] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:19.483 passed 00:08:19.483 Test: verify: APPTAG incorrect, APPTAG check ...[2024-02-14 19:08:56.717051] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:19.483 passed 00:08:19.483 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:19.483 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:19.483 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:19.483 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:19.483 Test: generate copy: DIF generated, GUARD check ...passed 00:08:19.483 Test: generate copy: DIF generated, APTTAG check ...[2024-02-14 19:08:56.717434] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:19.483 passed 00:08:19.483 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:19.483 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:19.483 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:19.483 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:19.483 Test: generate copy: iovecs-len 
validate ...passed 00:08:19.483 Test: generate copy: buffer alignment validate ...[2024-02-14 19:08:56.718173] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:19.483 passed 00:08:19.483 00:08:19.483 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.483 suites 1 1 n/a 0 0 00:08:19.483 tests 20 20 20 0 0 00:08:19.483 asserts 204 204 204 0 n/a 00:08:19.483 00:08:19.483 Elapsed time = 0.005 seconds 00:08:19.483 [2024-02-14 19:08:56.718788] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:20.416 00:08:20.416 real 0m1.762s 00:08:20.416 user 0m3.401s 00:08:20.416 sys 0m0.200s 00:08:20.416 ************************************ 00:08:20.416 END TEST accel_dif_functional_tests 00:08:20.416 ************************************ 00:08:20.416 19:08:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.416 19:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:20.416 00:08:20.416 real 1m43.250s 00:08:20.416 user 1m53.518s 00:08:20.416 sys 0m7.874s 00:08:20.416 ************************************ 00:08:20.416 END TEST accel 00:08:20.416 ************************************ 00:08:20.416 19:08:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.416 19:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:20.697 19:08:57 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:20.697 19:08:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:20.697 19:08:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:20.697 19:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:20.697 ************************************ 00:08:20.697 START TEST accel_rpc 00:08:20.697 ************************************ 00:08:20.697 19:08:57 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:20.697 * Looking for test storage... 00:08:20.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:20.697 19:08:57 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:20.697 19:08:57 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=61056 00:08:20.697 19:08:57 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:20.697 19:08:57 -- accel/accel_rpc.sh@15 -- # waitforlisten 61056 00:08:20.697 19:08:57 -- common/autotest_common.sh@817 -- # '[' -z 61056 ']' 00:08:20.697 19:08:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.697 19:08:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:20.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.697 19:08:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.697 19:08:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:20.697 19:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:20.697 [2024-02-14 19:08:58.052286] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:20.697 [2024-02-14 19:08:58.052475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61056 ] 00:08:20.955 [2024-02-14 19:08:58.226383] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.213 [2024-02-14 19:08:58.449921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.213 [2024-02-14 19:08:58.450118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.780 19:08:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:21.780 19:08:58 -- common/autotest_common.sh@850 -- # return 0 00:08:21.780 19:08:58 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:21.780 19:08:58 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:21.780 19:08:58 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:21.780 19:08:58 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:21.780 19:08:58 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:21.780 19:08:58 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:21.780 19:08:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:21.780 19:08:58 -- common/autotest_common.sh@10 -- # set +x 00:08:21.780 ************************************ 00:08:21.780 START TEST accel_assign_opcode 00:08:21.780 ************************************ 00:08:21.780 19:08:58 -- common/autotest_common.sh@1102 -- # accel_assign_opcode_test_suite 00:08:21.780 19:08:58 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:21.780 19:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.780 19:08:58 -- common/autotest_common.sh@10 -- # set +x 00:08:21.780 [2024-02-14 19:08:58.951106] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:21.780 19:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.780 19:08:58 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:21.780 19:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.780 19:08:58 -- common/autotest_common.sh@10 -- # set +x 00:08:21.780 [2024-02-14 19:08:58.963078] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:21.780 19:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.780 19:08:58 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:21.780 19:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.780 19:08:58 -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 19:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.347 19:08:59 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:22.347 19:08:59 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:22.347 19:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.347 19:08:59 -- accel/accel_rpc.sh@42 -- # grep software 00:08:22.347 19:08:59 -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 19:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.347 software 00:08:22.347 ************************************ 00:08:22.347 END TEST accel_assign_opcode 00:08:22.347 ************************************ 00:08:22.347 00:08:22.347 real 0m0.694s 00:08:22.347 user 0m0.046s 00:08:22.347 sys 0m0.013s 00:08:22.347 19:08:59 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.347 19:08:59 -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 19:08:59 -- accel/accel_rpc.sh@55 -- # killprocess 61056 00:08:22.347 19:08:59 -- common/autotest_common.sh@924 -- # '[' -z 61056 ']' 00:08:22.347 19:08:59 -- common/autotest_common.sh@928 -- # kill -0 61056 00:08:22.347 19:08:59 -- common/autotest_common.sh@929 -- # uname 00:08:22.347 19:08:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:22.347 19:08:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 61056 00:08:22.347 killing process with pid 61056 00:08:22.347 19:08:59 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:22.347 19:08:59 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:22.347 19:08:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 61056' 00:08:22.347 19:08:59 -- common/autotest_common.sh@943 -- # kill 61056 00:08:22.347 19:08:59 -- common/autotest_common.sh@948 -- # wait 61056 00:08:24.881 00:08:24.881 real 0m3.834s 00:08:24.881 user 0m3.874s 00:08:24.881 sys 0m0.445s 00:08:24.881 19:09:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.881 ************************************ 00:08:24.881 19:09:01 -- common/autotest_common.sh@10 -- # set +x 00:08:24.881 END TEST accel_rpc 00:08:24.881 ************************************ 00:08:24.881 19:09:01 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:24.881 19:09:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:24.881 19:09:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:24.881 19:09:01 -- common/autotest_common.sh@10 -- # set +x 00:08:24.881 ************************************ 00:08:24.881 START TEST app_cmdline 00:08:24.881 ************************************ 00:08:24.881 19:09:01 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:24.881 * Looking for test storage... 00:08:24.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:24.881 19:09:01 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:24.881 19:09:01 -- app/cmdline.sh@17 -- # spdk_tgt_pid=61171 00:08:24.881 19:09:01 -- app/cmdline.sh@18 -- # waitforlisten 61171 00:08:24.881 19:09:01 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:24.881 19:09:01 -- common/autotest_common.sh@817 -- # '[' -z 61171 ']' 00:08:24.881 19:09:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.881 19:09:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:24.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.881 19:09:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.881 19:09:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:24.881 19:09:01 -- common/autotest_common.sh@10 -- # set +x 00:08:24.881 [2024-02-14 19:09:01.939808] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:24.881 [2024-02-14 19:09:01.940385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61171 ] 00:08:24.881 [2024-02-14 19:09:02.111280] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.881 [2024-02-14 19:09:02.293157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:24.881 [2024-02-14 19:09:02.293448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.258 19:09:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:26.258 19:09:03 -- common/autotest_common.sh@850 -- # return 0 00:08:26.258 19:09:03 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:26.516 { 00:08:26.516 "version": "SPDK v24.05-pre git sha1 aa824ae66", 00:08:26.516 "fields": { 00:08:26.516 "major": 24, 00:08:26.516 "minor": 5, 00:08:26.516 "patch": 0, 00:08:26.516 "suffix": "-pre", 00:08:26.516 "commit": "aa824ae66" 00:08:26.516 } 00:08:26.516 } 00:08:26.516 19:09:03 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:26.516 19:09:03 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:26.516 19:09:03 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:26.516 19:09:03 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:26.516 19:09:03 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:26.516 19:09:03 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:26.516 19:09:03 -- app/cmdline.sh@26 -- # sort 00:08:26.516 19:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.516 19:09:03 -- common/autotest_common.sh@10 -- # set +x 00:08:26.516 19:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.775 19:09:03 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:26.775 19:09:03 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:26.775 19:09:03 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.775 19:09:03 -- common/autotest_common.sh@638 -- # local es=0 00:08:26.775 19:09:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.775 19:09:03 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.775 19:09:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:26.775 19:09:03 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.775 19:09:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:26.775 19:09:03 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.775 19:09:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:26.775 19:09:03 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.775 19:09:03 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:26.775 19:09:03 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:27.034 request: 00:08:27.034 { 00:08:27.034 "method": "env_dpdk_get_mem_stats", 00:08:27.034 "req_id": 1 00:08:27.034 } 00:08:27.034 Got 
JSON-RPC error response 00:08:27.034 response: 00:08:27.034 { 00:08:27.034 "code": -32601, 00:08:27.034 "message": "Method not found" 00:08:27.034 } 00:08:27.034 19:09:04 -- common/autotest_common.sh@641 -- # es=1 00:08:27.034 19:09:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:27.034 19:09:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:27.034 19:09:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:27.034 19:09:04 -- app/cmdline.sh@1 -- # killprocess 61171 00:08:27.034 19:09:04 -- common/autotest_common.sh@924 -- # '[' -z 61171 ']' 00:08:27.034 19:09:04 -- common/autotest_common.sh@928 -- # kill -0 61171 00:08:27.034 19:09:04 -- common/autotest_common.sh@929 -- # uname 00:08:27.034 19:09:04 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:27.034 19:09:04 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 61171 00:08:27.034 killing process with pid 61171 00:08:27.034 19:09:04 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:27.034 19:09:04 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:27.034 19:09:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 61171' 00:08:27.034 19:09:04 -- common/autotest_common.sh@943 -- # kill 61171 00:08:27.034 19:09:04 -- common/autotest_common.sh@948 -- # wait 61171 00:08:29.627 00:08:29.627 real 0m4.674s 00:08:29.627 user 0m5.396s 00:08:29.627 sys 0m0.530s 00:08:29.627 19:09:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.627 ************************************ 00:08:29.627 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.627 END TEST app_cmdline 00:08:29.627 ************************************ 00:08:29.627 19:09:06 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:29.627 19:09:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:29.627 19:09:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:29.627 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.627 ************************************ 00:08:29.627 START TEST version 00:08:29.627 ************************************ 00:08:29.627 19:09:06 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:29.627 * Looking for test storage... 
00:08:29.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:29.627 19:09:06 -- app/version.sh@17 -- # get_header_version major 00:08:29.628 19:09:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.628 19:09:06 -- app/version.sh@14 -- # cut -f2 00:08:29.628 19:09:06 -- app/version.sh@14 -- # tr -d '"' 00:08:29.628 19:09:06 -- app/version.sh@17 -- # major=24 00:08:29.628 19:09:06 -- app/version.sh@18 -- # get_header_version minor 00:08:29.628 19:09:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.628 19:09:06 -- app/version.sh@14 -- # cut -f2 00:08:29.628 19:09:06 -- app/version.sh@14 -- # tr -d '"' 00:08:29.628 19:09:06 -- app/version.sh@18 -- # minor=5 00:08:29.628 19:09:06 -- app/version.sh@19 -- # get_header_version patch 00:08:29.628 19:09:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.628 19:09:06 -- app/version.sh@14 -- # cut -f2 00:08:29.628 19:09:06 -- app/version.sh@14 -- # tr -d '"' 00:08:29.628 19:09:06 -- app/version.sh@19 -- # patch=0 00:08:29.628 19:09:06 -- app/version.sh@20 -- # get_header_version suffix 00:08:29.628 19:09:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:29.628 19:09:06 -- app/version.sh@14 -- # cut -f2 00:08:29.628 19:09:06 -- app/version.sh@14 -- # tr -d '"' 00:08:29.628 19:09:06 -- app/version.sh@20 -- # suffix=-pre 00:08:29.628 19:09:06 -- app/version.sh@22 -- # version=24.5 00:08:29.628 19:09:06 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:29.628 19:09:06 -- app/version.sh@28 -- # version=24.5rc0 00:08:29.628 19:09:06 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:29.628 19:09:06 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:29.628 19:09:06 -- app/version.sh@30 -- # py_version=24.5rc0 00:08:29.628 19:09:06 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:08:29.628 00:08:29.628 real 0m0.151s 00:08:29.628 user 0m0.083s 00:08:29.628 sys 0m0.098s 00:08:29.628 ************************************ 00:08:29.628 END TEST version 00:08:29.628 ************************************ 00:08:29.628 19:09:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.628 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.628 19:09:06 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:29.628 19:09:06 -- spdk/autotest.sh@204 -- # uname -s 00:08:29.628 19:09:06 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:29.628 19:09:06 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:29.628 19:09:06 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:29.628 19:09:06 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:08:29.628 19:09:06 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:29.628 19:09:06 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:08:29.628 19:09:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:29.628 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.628 ************************************ 00:08:29.628 START TEST blockdev_nvme 
00:08:29.628 ************************************ 00:08:29.628 19:09:06 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:29.628 * Looking for test storage... 00:08:29.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:29.628 19:09:06 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:29.628 19:09:06 -- bdev/nbd_common.sh@6 -- # set -e 00:08:29.628 19:09:06 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:29.628 19:09:06 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:29.628 19:09:06 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:29.628 19:09:06 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:29.628 19:09:06 -- bdev/blockdev.sh@18 -- # : 00:08:29.628 19:09:06 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:29.628 19:09:06 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:29.628 19:09:06 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:29.628 19:09:06 -- bdev/blockdev.sh@672 -- # uname -s 00:08:29.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.628 19:09:06 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:29.628 19:09:06 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:29.628 19:09:06 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:08:29.628 19:09:06 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:29.628 19:09:06 -- bdev/blockdev.sh@682 -- # dek= 00:08:29.628 19:09:06 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:29.628 19:09:06 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:29.628 19:09:06 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:29.628 19:09:06 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:08:29.628 19:09:06 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:08:29.628 19:09:06 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:29.628 19:09:06 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61348 00:08:29.628 19:09:06 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:29.628 19:09:06 -- bdev/blockdev.sh@47 -- # waitforlisten 61348 00:08:29.628 19:09:06 -- common/autotest_common.sh@817 -- # '[' -z 61348 ']' 00:08:29.628 19:09:06 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:29.628 19:09:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.628 19:09:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:29.628 19:09:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.628 19:09:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:29.628 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.628 [2024-02-14 19:09:06.884441] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
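At this point blockdev.sh has launched spdk_tgt with an empty command line (the bdev config is loaded later over RPC), and waitforlisten blocks until the target's UNIX-domain RPC socket answers before any rpc_cmd calls are issued. The real helper lives in autotest_common.sh and is not reproduced here; a simplified stand-in with the same effect is to poll a harmless RPC such as rpc_get_methods against the default /var/tmp/spdk.sock until it succeeds:

  build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
      sleep 0.1
  done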
00:08:29.628 [2024-02-14 19:09:06.884816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61348 ] 00:08:29.886 [2024-02-14 19:09:07.055129] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.886 [2024-02-14 19:09:07.291237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:29.886 [2024-02-14 19:09:07.291790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.264 19:09:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:31.264 19:09:08 -- common/autotest_common.sh@850 -- # return 0 00:08:31.264 19:09:08 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:31.264 19:09:08 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:08:31.264 19:09:08 -- bdev/blockdev.sh@79 -- # local json 00:08:31.264 19:09:08 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:31.264 19:09:08 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:31.264 19:09:08 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:31.264 19:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.264 19:09:08 -- common/autotest_common.sh@10 -- # set +x 00:08:31.523 19:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.523 19:09:08 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:31.523 19:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.523 19:09:08 -- common/autotest_common.sh@10 -- # set +x 00:08:31.523 19:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.523 19:09:08 -- bdev/blockdev.sh@738 -- # cat 00:08:31.523 19:09:08 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:31.523 19:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.523 19:09:08 -- common/autotest_common.sh@10 -- # set +x 00:08:31.523 19:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.523 19:09:08 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:31.523 19:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.523 19:09:08 -- common/autotest_common.sh@10 -- # set +x 00:08:31.783 19:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.783 19:09:08 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:31.783 19:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.783 19:09:08 -- common/autotest_common.sh@10 -- # set +x 00:08:31.783 19:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.783 19:09:08 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:31.783 19:09:08 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:31.783 19:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.783 19:09:08 -- common/autotest_common.sh@10 -- # set +x 00:08:31.783 19:09:08 -- bdev/blockdev.sh@746 -- # 
jq -r '.[] | select(.claimed == false)' 00:08:31.783 19:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.783 19:09:09 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:31.783 19:09:09 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:31.783 19:09:09 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8f217350-be56-4653-a309-a09b658f00a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8f217350-be56-4653-a309-a09b658f00a2",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "269f8143-583c-4123-a667-89dfff57f7ba"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "269f8143-583c-4123-a667-89dfff57f7ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b5154d19-6213-4a33-86c8-2bf81ba2224f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b5154d19-6213-4a33-86c8-2bf81ba2224f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "45a8b838-4744-495e-821b-443a93895e5c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "45a8b838-4744-495e-821b-443a93895e5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8ba73c4e-3ee3-40a4-b151-e86ebd7f54d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8ba73c4e-3ee3-40a4-b151-e86ebd7f54d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "10457b05-8909-4641-a770-38725754b9d7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"10457b05-8909-4641-a770-38725754b9d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:31.783 19:09:09 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:31.783 19:09:09 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:08:31.783 19:09:09 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:31.783 19:09:09 -- bdev/blockdev.sh@752 -- # killprocess 61348 00:08:31.783 19:09:09 -- common/autotest_common.sh@924 -- # '[' -z 61348 ']' 00:08:31.783 19:09:09 -- common/autotest_common.sh@928 -- # kill -0 61348 00:08:31.783 19:09:09 -- common/autotest_common.sh@929 -- # uname 00:08:31.783 19:09:09 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:31.783 19:09:09 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 61348 00:08:31.783 killing process with pid 61348 00:08:31.783 19:09:09 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:31.783 19:09:09 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:31.783 19:09:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 61348' 00:08:31.783 19:09:09 -- common/autotest_common.sh@943 -- # kill 61348 00:08:31.783 19:09:09 -- common/autotest_common.sh@948 -- # wait 61348 00:08:34.317 19:09:11 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:34.317 19:09:11 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:34.317 19:09:11 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:08:34.317 19:09:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:34.317 19:09:11 -- common/autotest_common.sh@10 -- # set +x 00:08:34.317 ************************************ 00:08:34.317 START TEST bdev_hello_world 00:08:34.317 ************************************ 00:08:34.317 19:09:11 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:34.317 [2024-02-14 19:09:11.423049] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:34.317 [2024-02-14 19:09:11.423459] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61446 ] 00:08:34.317 [2024-02-14 19:09:11.594858] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.576 [2024-02-14 19:09:11.791761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.577 [2024-02-14 19:09:11.792060] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:35.144 [2024-02-14 19:09:12.403547] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:35.144 [2024-02-14 19:09:12.403823] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:35.144 [2024-02-14 19:09:12.403898] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:35.144 [2024-02-14 19:09:12.406931] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:35.144 [2024-02-14 19:09:12.407589] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:35.144 [2024-02-14 19:09:12.407627] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:35.144 [2024-02-14 19:09:12.407865] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:35.144 00:08:35.144 [2024-02-14 19:09:12.407901] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:35.144 [2024-02-14 19:09:12.407960] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:36.522 00:08:36.522 real 0m2.227s 00:08:36.522 user 0m1.889s 00:08:36.522 sys 0m0.226s 00:08:36.522 19:09:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.522 ************************************ 00:08:36.522 END TEST bdev_hello_world 00:08:36.522 ************************************ 00:08:36.522 19:09:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.522 19:09:13 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:36.522 19:09:13 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:08:36.522 19:09:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:36.522 19:09:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.522 ************************************ 00:08:36.522 START TEST bdev_bounds 00:08:36.522 ************************************ 00:08:36.522 19:09:13 -- common/autotest_common.sh@1102 -- # bdev_bounds '' 00:08:36.522 Process bdevio pid: 61494 00:08:36.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:36.522 19:09:13 -- bdev/blockdev.sh@288 -- # bdevio_pid=61494 00:08:36.522 19:09:13 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.522 19:09:13 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61494' 00:08:36.522 19:09:13 -- bdev/blockdev.sh@291 -- # waitforlisten 61494 00:08:36.522 19:09:13 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:36.522 19:09:13 -- common/autotest_common.sh@817 -- # '[' -z 61494 ']' 00:08:36.522 19:09:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.522 19:09:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:36.522 19:09:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.522 19:09:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:36.522 19:09:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.522 [2024-02-14 19:09:13.699001] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:08:36.522 [2024-02-14 19:09:13.699164] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61494 ] 00:08:36.522 [2024-02-14 19:09:13.871895] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.781 [2024-02-14 19:09:14.070180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.781 [2024-02-14 19:09:14.070319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.781 [2024-02-14 19:09:14.070640] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:36.781 [2024-02-14 19:09:14.070337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.197 19:09:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:38.197 19:09:15 -- common/autotest_common.sh@850 -- # return 0 00:08:38.197 19:09:15 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:38.197 I/O targets: 00:08:38.197 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:38.197 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:38.197 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:38.197 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:38.197 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:38.197 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:38.197 00:08:38.197 00:08:38.197 CUnit - A unit testing framework for C - Version 2.1-3 00:08:38.197 http://cunit.sourceforge.net/ 00:08:38.197 00:08:38.197 00:08:38.197 Suite: bdevio tests on: Nvme3n1 00:08:38.197 Test: blockdev write read block ...passed 00:08:38.197 Test: blockdev write zeroes read block ...passed 00:08:38.197 Test: blockdev write zeroes read no split ...passed 00:08:38.197 Test: blockdev write zeroes read split ...passed 00:08:38.197 Test: blockdev write zeroes read split partial ...passed 00:08:38.197 Test: blockdev reset ...[2024-02-14 19:09:15.596139] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:38.197 passed 00:08:38.197 Test: blockdev write read 8 blocks ...[2024-02-14 
19:09:15.599755] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:38.197 passed 00:08:38.197 Test: blockdev write read size > 128k ...passed 00:08:38.197 Test: blockdev write read invalid size ...passed 00:08:38.197 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.197 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.197 Test: blockdev write read max offset ...passed 00:08:38.197 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.197 Test: blockdev writev readv 8 blocks ...passed 00:08:38.197 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.197 Test: blockdev writev readv block ...passed 00:08:38.197 Test: blockdev writev readv size > 128k ...passed 00:08:38.197 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.197 Test: blockdev comparev and writev ...[2024-02-14 19:09:15.607958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29780e000 len:0x1000 00:08:38.197 [2024-02-14 19:09:15.608035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:38.197 passed 00:08:38.197 Test: blockdev nvme passthru rw ...passed 00:08:38.197 Test: blockdev nvme passthru vendor specific ...passed 00:08:38.197 Test: blockdev nvme admin passthru ...[2024-02-14 19:09:15.608793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:38.197 [2024-02-14 19:09:15.608841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:38.457 passed 00:08:38.457 Test: blockdev copy ...passed 00:08:38.457 Suite: bdevio tests on: Nvme2n3 00:08:38.457 Test: blockdev write read block ...passed 00:08:38.457 Test: blockdev write zeroes read block ...passed 00:08:38.457 Test: blockdev write zeroes read no split ...passed 00:08:38.457 Test: blockdev write zeroes read split ...passed 00:08:38.457 Test: blockdev write zeroes read split partial ...passed 00:08:38.457 Test: blockdev reset ...[2024-02-14 19:09:15.672030] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:38.457 [2024-02-14 19:09:15.675874] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:38.457 passed 00:08:38.457 Test: blockdev write read 8 blocks ...passed 00:08:38.457 Test: blockdev write read size > 128k ...passed 00:08:38.457 Test: blockdev write read invalid size ...passed 00:08:38.457 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.457 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.457 Test: blockdev write read max offset ...passed 00:08:38.457 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.457 Test: blockdev writev readv 8 blocks ...passed 00:08:38.457 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.457 Test: blockdev writev readv block ...passed 00:08:38.457 Test: blockdev writev readv size > 128k ...passed 00:08:38.457 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.457 Test: blockdev comparev and writev ...[2024-02-14 19:09:15.684341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29780a000 len:0x1000 00:08:38.457 [2024-02-14 19:09:15.684402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:38.457 passed 00:08:38.457 Test: blockdev nvme passthru rw ...passed 00:08:38.457 Test: blockdev nvme passthru vendor specific ...passed 00:08:38.457 Test: blockdev nvme admin passthru ...[2024-02-14 19:09:15.685178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:38.457 [2024-02-14 19:09:15.685224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:38.457 passed 00:08:38.457 Test: blockdev copy ...passed 00:08:38.457 Suite: bdevio tests on: Nvme2n2 00:08:38.457 Test: blockdev write read block ...passed 00:08:38.457 Test: blockdev write zeroes read block ...passed 00:08:38.457 Test: blockdev write zeroes read no split ...passed 00:08:38.457 Test: blockdev write zeroes read split ...passed 00:08:38.457 Test: blockdev write zeroes read split partial ...passed 00:08:38.457 Test: blockdev reset ...[2024-02-14 19:09:15.744479] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:38.457 [2024-02-14 19:09:15.748314] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:38.457 passed 00:08:38.457 Test: blockdev write read 8 blocks ...passed 00:08:38.457 Test: blockdev write read size > 128k ...passed 00:08:38.457 Test: blockdev write read invalid size ...passed 00:08:38.457 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.457 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.457 Test: blockdev write read max offset ...passed 00:08:38.457 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.457 Test: blockdev writev readv 8 blocks ...passed 00:08:38.457 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.457 Test: blockdev writev readv block ...passed 00:08:38.457 Test: blockdev writev readv size > 128k ...passed 00:08:38.457 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.457 Test: blockdev comparev and writev ...[2024-02-14 19:09:15.757579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28ee06000 len:0x1000 00:08:38.457 [2024-02-14 19:09:15.757637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:38.457 passed 00:08:38.457 Test: blockdev nvme passthru rw ...passed 00:08:38.457 Test: blockdev nvme passthru vendor specific ...passed 00:08:38.457 Test: blockdev nvme admin passthru ...[2024-02-14 19:09:15.758508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:38.457 [2024-02-14 19:09:15.758569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:38.457 passed 00:08:38.457 Test: blockdev copy ...passed 00:08:38.457 Suite: bdevio tests on: Nvme2n1 00:08:38.457 Test: blockdev write read block ...passed 00:08:38.457 Test: blockdev write zeroes read block ...passed 00:08:38.457 Test: blockdev write zeroes read no split ...passed 00:08:38.457 Test: blockdev write zeroes read split ...passed 00:08:38.457 Test: blockdev write zeroes read split partial ...passed 00:08:38.457 Test: blockdev reset ...[2024-02-14 19:09:15.821241] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:38.457 [2024-02-14 19:09:15.825176] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:38.457 passed 00:08:38.457 Test: blockdev write read 8 blocks ...passed 00:08:38.457 Test: blockdev write read size > 128k ...passed 00:08:38.457 Test: blockdev write read invalid size ...passed 00:08:38.457 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.457 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.457 Test: blockdev write read max offset ...passed 00:08:38.457 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.457 Test: blockdev writev readv 8 blocks ...passed 00:08:38.457 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.457 Test: blockdev writev readv block ...passed 00:08:38.457 Test: blockdev writev readv size > 128k ...passed 00:08:38.457 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.457 Test: blockdev comparev and writev ...[2024-02-14 19:09:15.834301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28ee01000 len:0x1000 00:08:38.457 [2024-02-14 19:09:15.834374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:38.457 passed 00:08:38.457 Test: blockdev nvme passthru rw ...passed 00:08:38.457 Test: blockdev nvme passthru vendor specific ...passed 00:08:38.457 Test: blockdev nvme admin passthru ...[2024-02-14 19:09:15.835329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:38.457 [2024-02-14 19:09:15.835380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:38.457 passed 00:08:38.457 Test: blockdev copy ...passed 00:08:38.457 Suite: bdevio tests on: Nvme1n1 00:08:38.457 Test: blockdev write read block ...passed 00:08:38.457 Test: blockdev write zeroes read block ...passed 00:08:38.457 Test: blockdev write zeroes read no split ...passed 00:08:38.717 Test: blockdev write zeroes read split ...passed 00:08:38.717 Test: blockdev write zeroes read split partial ...passed 00:08:38.717 Test: blockdev reset ...[2024-02-14 19:09:15.914665] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:38.717 [2024-02-14 19:09:15.918183] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:38.717 passed 00:08:38.717 Test: blockdev write read 8 blocks ...passed 00:08:38.717 Test: blockdev write read size > 128k ...passed 00:08:38.717 Test: blockdev write read invalid size ...passed 00:08:38.717 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.717 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.717 Test: blockdev write read max offset ...passed 00:08:38.717 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.717 Test: blockdev writev readv 8 blocks ...passed 00:08:38.717 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.717 Test: blockdev writev readv block ...passed 00:08:38.717 Test: blockdev writev readv size > 128k ...passed 00:08:38.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.717 Test: blockdev comparev and writev ...[2024-02-14 19:09:15.927245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x289e06000 len:0x1000 00:08:38.717 [2024-02-14 19:09:15.927334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:38.717 passed 00:08:38.717 Test: blockdev nvme passthru rw ...passed 00:08:38.717 Test: blockdev nvme passthru vendor specific ...[2024-02-14 19:09:15.928305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:38.717 [2024-02-14 19:09:15.928361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:38.717 passed 00:08:38.717 Test: blockdev nvme admin passthru ...passed 00:08:38.717 Test: blockdev copy ...passed 00:08:38.717 Suite: bdevio tests on: Nvme0n1 00:08:38.717 Test: blockdev write read block ...passed 00:08:38.717 Test: blockdev write zeroes read block ...passed 00:08:38.717 Test: blockdev write zeroes read no split ...passed 00:08:38.717 Test: blockdev write zeroes read split ...passed 00:08:38.717 Test: blockdev write zeroes read split partial ...passed 00:08:38.717 Test: blockdev reset ...[2024-02-14 19:09:16.004125] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:38.717 [2024-02-14 19:09:16.007455] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:38.717 passed 00:08:38.717 Test: blockdev write read 8 blocks ...passed 00:08:38.717 Test: blockdev write read size > 128k ...passed 00:08:38.717 Test: blockdev write read invalid size ...passed 00:08:38.717 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.717 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.717 Test: blockdev write read max offset ...passed 00:08:38.717 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.717 Test: blockdev writev readv 8 blocks ...passed 00:08:38.717 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.717 Test: blockdev writev readv block ...passed 00:08:38.717 Test: blockdev writev readv size > 128k ...passed 00:08:38.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.717 Test: blockdev comparev and writev ...passed 00:08:38.717 Test: blockdev nvme passthru rw ...[2024-02-14 19:09:16.016136] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:38.717 separate metadata which is not supported yet. 00:08:38.717 passed 00:08:38.717 Test: blockdev nvme passthru vendor specific ...passed 00:08:38.717 Test: blockdev nvme admin passthru ...[2024-02-14 19:09:16.016755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:38.717 [2024-02-14 19:09:16.016810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:38.717 passed 00:08:38.717 Test: blockdev copy ...passed 00:08:38.717 00:08:38.717 Run Summary: Type Total Ran Passed Failed Inactive 00:08:38.717 suites 6 6 n/a 0 0 00:08:38.717 tests 138 138 138 0 0 00:08:38.717 asserts 893 893 893 0 n/a 00:08:38.717 00:08:38.717 Elapsed time = 1.318 seconds 00:08:38.717 0 00:08:38.717 19:09:16 -- bdev/blockdev.sh@293 -- # killprocess 61494 00:08:38.717 19:09:16 -- common/autotest_common.sh@924 -- # '[' -z 61494 ']' 00:08:38.717 19:09:16 -- common/autotest_common.sh@928 -- # kill -0 61494 00:08:38.717 19:09:16 -- common/autotest_common.sh@929 -- # uname 00:08:38.717 19:09:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:38.717 19:09:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 61494 00:08:38.717 19:09:16 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:38.717 19:09:16 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:38.717 19:09:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 61494' 00:08:38.717 killing process with pid 61494 00:08:38.717 19:09:16 -- common/autotest_common.sh@943 -- # kill 61494 00:08:38.717 [2024-02-14 19:09:16.071177] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk 19:09:16 -- common/autotest_common.sh@948 -- # wait 61494 00:08:38.717 _subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:39.653 19:09:16 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:39.653 00:08:39.653 real 0m3.366s 00:08:39.653 user 0m8.954s 00:08:39.653 sys 0m0.396s 00:08:39.653 19:09:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:39.653 19:09:16 -- common/autotest_common.sh@10 -- # set +x 00:08:39.653 ************************************ 00:08:39.653 END TEST bdev_bounds 00:08:39.653 ************************************ 00:08:39.653 19:09:17 -- bdev/blockdev.sh@760 -- # run_test 
bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:39.653 19:09:17 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:08:39.653 19:09:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:39.653 19:09:17 -- common/autotest_common.sh@10 -- # set +x 00:08:39.653 ************************************ 00:08:39.653 START TEST bdev_nbd 00:08:39.653 ************************************ 00:08:39.653 19:09:17 -- common/autotest_common.sh@1102 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:39.653 19:09:17 -- bdev/blockdev.sh@298 -- # uname -s 00:08:39.653 19:09:17 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:39.653 19:09:17 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.653 19:09:17 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:39.653 19:09:17 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:39.653 19:09:17 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:39.653 19:09:17 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:08:39.653 19:09:17 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:39.653 19:09:17 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:39.653 19:09:17 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:39.653 19:09:17 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:08:39.653 19:09:17 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:39.653 19:09:17 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:39.653 19:09:17 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:39.653 19:09:17 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:39.653 19:09:17 -- bdev/blockdev.sh@316 -- # nbd_pid=61561 00:08:39.653 19:09:17 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:39.653 19:09:17 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:39.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:39.653 19:09:17 -- bdev/blockdev.sh@318 -- # waitforlisten 61561 /var/tmp/spdk-nbd.sock 00:08:39.653 19:09:17 -- common/autotest_common.sh@817 -- # '[' -z 61561 ']' 00:08:39.653 19:09:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:39.653 19:09:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:39.653 19:09:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:39.653 19:09:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:39.653 19:09:17 -- common/autotest_common.sh@10 -- # set +x 00:08:39.912 [2024-02-14 19:09:17.111462] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
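The nbd test starts a second, minimal SPDK app (bdev_svc) on its own RPC socket so NBD exports can be created and torn down without touching the main target. Reduced to a single bdev, the pattern that plays out over the following lines looks roughly like this (socket path and bdev name as in this run; the wait step is elided):

  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
  # ...wait for /var/tmp/spdk-nbd.sock to accept RPCs, then:
  nbd_dev=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)
  echo "exported Nvme0n1 as $nbd_dev"    # /dev/nbd0 in this run
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd_dev"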
00:08:39.912 [2024-02-14 19:09:17.111664] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.912 [2024-02-14 19:09:17.274437] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.170 [2024-02-14 19:09:17.449519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.170 [2024-02-14 19:09:17.449656] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:40.737 19:09:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:40.737 19:09:18 -- common/autotest_common.sh@850 -- # return 0 00:08:40.737 19:09:18 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@24 -- # local i 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:40.737 19:09:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:40.995 19:09:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:40.996 19:09:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:40.996 19:09:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:40.996 19:09:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:08:40.996 19:09:18 -- common/autotest_common.sh@855 -- # local i 00:08:40.996 19:09:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:40.996 19:09:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:40.996 19:09:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:08:40.996 19:09:18 -- common/autotest_common.sh@859 -- # break 00:08:40.996 19:09:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:40.996 19:09:18 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:40.996 19:09:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.996 1+0 records in 00:08:40.996 1+0 records out 00:08:40.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532382 s, 7.7 MB/s 00:08:40.996 19:09:18 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.996 19:09:18 -- common/autotest_common.sh@872 -- # size=4096 00:08:40.996 19:09:18 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
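The waitfornbd helper just exercised for nbd0 is what makes these exports trustworthy: it polls /proc/partitions until the kernel has registered the device (capped at 20 attempts, as the (( i <= 20 )) guard shows), then reads a single 4096-byte block with O_DIRECT and checks that a non-empty scratch file came back. Roughly, with an illustrative sleep interval:

  nbd=nbd0
  for _ in $(seq 1 20); do
      grep -q -w "$nbd" /proc/partitions && break
      sleep 0.1
  done
  dd if=/dev/"$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]    # 4096 bytes in this run
  rm -f /tmp/nbdtest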
00:08:40.996 19:09:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:40.996 19:09:18 -- common/autotest_common.sh@875 -- # return 0 00:08:40.996 19:09:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:40.996 19:09:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:40.996 19:09:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:41.254 19:09:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:41.254 19:09:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:41.254 19:09:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:41.254 19:09:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:08:41.254 19:09:18 -- common/autotest_common.sh@855 -- # local i 00:08:41.254 19:09:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:41.254 19:09:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:41.254 19:09:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:08:41.254 19:09:18 -- common/autotest_common.sh@859 -- # break 00:08:41.254 19:09:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:41.254 19:09:18 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:41.254 19:09:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.254 1+0 records in 00:08:41.254 1+0 records out 00:08:41.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572039 s, 7.2 MB/s 00:08:41.254 19:09:18 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.254 19:09:18 -- common/autotest_common.sh@872 -- # size=4096 00:08:41.254 19:09:18 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.254 19:09:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:41.254 19:09:18 -- common/autotest_common.sh@875 -- # return 0 00:08:41.254 19:09:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.254 19:09:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:41.254 19:09:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:41.512 19:09:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:41.512 19:09:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:41.512 19:09:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:41.512 19:09:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:08:41.512 19:09:18 -- common/autotest_common.sh@855 -- # local i 00:08:41.512 19:09:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:41.512 19:09:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:41.512 19:09:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:08:41.512 19:09:18 -- common/autotest_common.sh@859 -- # break 00:08:41.512 19:09:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:41.512 19:09:18 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:41.512 19:09:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.512 1+0 records in 00:08:41.512 1+0 records out 00:08:41.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553341 s, 7.4 MB/s 00:08:41.512 19:09:18 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.512 19:09:18 -- common/autotest_common.sh@872 -- # size=4096 00:08:41.512 19:09:18 -- 
common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.512 19:09:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:41.512 19:09:18 -- common/autotest_common.sh@875 -- # return 0 00:08:41.512 19:09:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.512 19:09:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:41.512 19:09:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:41.770 19:09:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:41.770 19:09:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:41.770 19:09:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:41.770 19:09:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:08:41.770 19:09:19 -- common/autotest_common.sh@855 -- # local i 00:08:41.770 19:09:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:41.770 19:09:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:41.770 19:09:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:08:41.770 19:09:19 -- common/autotest_common.sh@859 -- # break 00:08:41.770 19:09:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:41.770 19:09:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:41.770 19:09:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.770 1+0 records in 00:08:41.770 1+0 records out 00:08:41.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585784 s, 7.0 MB/s 00:08:41.770 19:09:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.770 19:09:19 -- common/autotest_common.sh@872 -- # size=4096 00:08:41.770 19:09:19 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.770 19:09:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:41.770 19:09:19 -- common/autotest_common.sh@875 -- # return 0 00:08:41.770 19:09:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.770 19:09:19 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:41.770 19:09:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:42.028 19:09:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:42.028 19:09:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:42.028 19:09:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:42.028 19:09:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:08:42.028 19:09:19 -- common/autotest_common.sh@855 -- # local i 00:08:42.028 19:09:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:42.028 19:09:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:42.028 19:09:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:08:42.028 19:09:19 -- common/autotest_common.sh@859 -- # break 00:08:42.028 19:09:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:42.028 19:09:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:42.028 19:09:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.028 1+0 records in 00:08:42.028 1+0 records out 00:08:42.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662234 s, 6.2 MB/s 00:08:42.028 19:09:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.028 
19:09:19 -- common/autotest_common.sh@872 -- # size=4096 00:08:42.028 19:09:19 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.028 19:09:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:42.028 19:09:19 -- common/autotest_common.sh@875 -- # return 0 00:08:42.028 19:09:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.028 19:09:19 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:42.028 19:09:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:42.594 19:09:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:42.594 19:09:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:42.594 19:09:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:42.594 19:09:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:08:42.594 19:09:19 -- common/autotest_common.sh@855 -- # local i 00:08:42.594 19:09:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:42.594 19:09:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:42.594 19:09:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:08:42.594 19:09:19 -- common/autotest_common.sh@859 -- # break 00:08:42.594 19:09:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:42.594 19:09:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:42.594 19:09:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.594 1+0 records in 00:08:42.594 1+0 records out 00:08:42.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783917 s, 5.2 MB/s 00:08:42.594 19:09:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.594 19:09:19 -- common/autotest_common.sh@872 -- # size=4096 00:08:42.594 19:09:19 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.594 19:09:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:42.594 19:09:19 -- common/autotest_common.sh@875 -- # return 0 00:08:42.594 19:09:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.594 19:09:19 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:42.594 19:09:19 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:42.594 19:09:20 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:42.594 { 00:08:42.594 "nbd_device": "/dev/nbd0", 00:08:42.594 "bdev_name": "Nvme0n1" 00:08:42.594 }, 00:08:42.594 { 00:08:42.594 "nbd_device": "/dev/nbd1", 00:08:42.594 "bdev_name": "Nvme1n1" 00:08:42.594 }, 00:08:42.594 { 00:08:42.594 "nbd_device": "/dev/nbd2", 00:08:42.594 "bdev_name": "Nvme2n1" 00:08:42.594 }, 00:08:42.594 { 00:08:42.594 "nbd_device": "/dev/nbd3", 00:08:42.594 "bdev_name": "Nvme2n2" 00:08:42.594 }, 00:08:42.594 { 00:08:42.594 "nbd_device": "/dev/nbd4", 00:08:42.594 "bdev_name": "Nvme2n3" 00:08:42.594 }, 00:08:42.594 { 00:08:42.594 "nbd_device": "/dev/nbd5", 00:08:42.594 "bdev_name": "Nvme3n1" 00:08:42.594 } 00:08:42.594 ]' 00:08:42.594 19:09:20 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:42.594 19:09:20 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:42.594 19:09:20 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:42.594 { 00:08:42.594 "nbd_device": "/dev/nbd0", 00:08:42.594 "bdev_name": "Nvme0n1" 00:08:42.594 }, 00:08:42.594 { 00:08:42.594 "nbd_device": 
"/dev/nbd1", 00:08:42.594 "bdev_name": "Nvme1n1" 00:08:42.594 }, 00:08:42.594 { 00:08:42.594 "nbd_device": "/dev/nbd2", 00:08:42.594 "bdev_name": "Nvme2n1" 00:08:42.594 }, 00:08:42.594 { 00:08:42.594 "nbd_device": "/dev/nbd3", 00:08:42.594 "bdev_name": "Nvme2n2" 00:08:42.594 }, 00:08:42.594 { 00:08:42.594 "nbd_device": "/dev/nbd4", 00:08:42.594 "bdev_name": "Nvme2n3" 00:08:42.595 }, 00:08:42.595 { 00:08:42.595 "nbd_device": "/dev/nbd5", 00:08:42.595 "bdev_name": "Nvme3n1" 00:08:42.595 } 00:08:42.595 ]' 00:08:42.853 19:09:20 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:42.853 19:09:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.853 19:09:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:42.853 19:09:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:42.853 19:09:20 -- bdev/nbd_common.sh@51 -- # local i 00:08:42.853 19:09:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.853 19:09:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:43.112 19:09:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:43.112 19:09:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:43.112 19:09:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:43.112 19:09:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.112 19:09:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.112 19:09:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:43.112 19:09:20 -- bdev/nbd_common.sh@41 -- # break 00:08:43.112 19:09:20 -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.112 19:09:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.112 19:09:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:43.371 19:09:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:43.371 19:09:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:43.371 19:09:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:43.371 19:09:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.371 19:09:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.371 19:09:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:43.371 19:09:20 -- bdev/nbd_common.sh@41 -- # break 00:08:43.371 19:09:20 -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.371 19:09:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.371 19:09:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:43.629 19:09:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:43.629 19:09:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:43.629 19:09:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:43.629 19:09:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.629 19:09:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.629 19:09:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:43.629 19:09:20 -- bdev/nbd_common.sh@41 -- # break 00:08:43.629 19:09:20 -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.629 19:09:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.629 19:09:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd3 00:08:43.887 19:09:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:43.887 19:09:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:43.887 19:09:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:43.887 19:09:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.887 19:09:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.887 19:09:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:43.887 19:09:21 -- bdev/nbd_common.sh@41 -- # break 00:08:43.887 19:09:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.887 19:09:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.887 19:09:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:44.160 19:09:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:44.161 19:09:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:44.161 19:09:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:44.161 19:09:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.161 19:09:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.161 19:09:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:44.161 19:09:21 -- bdev/nbd_common.sh@41 -- # break 00:08:44.161 19:09:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.161 19:09:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.161 19:09:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@41 -- # break 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.442 19:09:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:44.700 19:09:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:44.700 19:09:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:44.700 19:09:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@65 -- # true 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@65 -- # count=0 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@122 -- # count=0 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@127 -- # return 0 00:08:44.700 19:09:22 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@91 -- 
# bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@12 -- # local i 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:44.700 19:09:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:44.701 19:09:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:44.959 /dev/nbd0 00:08:44.959 19:09:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:44.959 19:09:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:44.959 19:09:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:08:44.959 19:09:22 -- common/autotest_common.sh@855 -- # local i 00:08:44.959 19:09:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:44.959 19:09:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:44.959 19:09:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:08:44.959 19:09:22 -- common/autotest_common.sh@859 -- # break 00:08:44.959 19:09:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:44.959 19:09:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:44.959 19:09:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.959 1+0 records in 00:08:44.959 1+0 records out 00:08:44.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444932 s, 9.2 MB/s 00:08:44.959 19:09:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.959 19:09:22 -- common/autotest_common.sh@872 -- # size=4096 00:08:44.959 19:09:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.959 19:09:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:44.959 19:09:22 -- common/autotest_common.sh@875 -- # return 0 00:08:44.959 19:09:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.959 19:09:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:44.959 19:09:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:45.217 /dev/nbd1 00:08:45.217 19:09:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:45.217 19:09:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:45.217 19:09:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:08:45.217 19:09:22 -- common/autotest_common.sh@855 -- # local i 00:08:45.217 19:09:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 
00:08:45.217 19:09:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:45.217 19:09:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:08:45.217 19:09:22 -- common/autotest_common.sh@859 -- # break 00:08:45.217 19:09:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:45.217 19:09:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:45.217 19:09:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.217 1+0 records in 00:08:45.217 1+0 records out 00:08:45.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497489 s, 8.2 MB/s 00:08:45.217 19:09:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.217 19:09:22 -- common/autotest_common.sh@872 -- # size=4096 00:08:45.217 19:09:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.217 19:09:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:45.217 19:09:22 -- common/autotest_common.sh@875 -- # return 0 00:08:45.217 19:09:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.217 19:09:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:45.217 19:09:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:45.477 /dev/nbd10 00:08:45.477 19:09:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:45.477 19:09:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:45.477 19:09:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:08:45.477 19:09:22 -- common/autotest_common.sh@855 -- # local i 00:08:45.477 19:09:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:45.477 19:09:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:45.477 19:09:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:08:45.477 19:09:22 -- common/autotest_common.sh@859 -- # break 00:08:45.477 19:09:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:45.477 19:09:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:45.477 19:09:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.477 1+0 records in 00:08:45.477 1+0 records out 00:08:45.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556316 s, 7.4 MB/s 00:08:45.477 19:09:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.477 19:09:22 -- common/autotest_common.sh@872 -- # size=4096 00:08:45.477 19:09:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.477 19:09:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:45.477 19:09:22 -- common/autotest_common.sh@875 -- # return 0 00:08:45.477 19:09:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.477 19:09:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:45.477 19:09:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:45.736 /dev/nbd11 00:08:45.736 19:09:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:45.736 19:09:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:45.736 19:09:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:08:45.736 19:09:23 -- common/autotest_common.sh@855 -- # local i 00:08:45.736 19:09:23 -- common/autotest_common.sh@857 -- 
# (( i = 1 )) 00:08:45.736 19:09:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:45.736 19:09:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:08:45.736 19:09:23 -- common/autotest_common.sh@859 -- # break 00:08:45.736 19:09:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:45.736 19:09:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:45.736 19:09:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.736 1+0 records in 00:08:45.736 1+0 records out 00:08:45.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642091 s, 6.4 MB/s 00:08:45.736 19:09:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.736 19:09:23 -- common/autotest_common.sh@872 -- # size=4096 00:08:45.736 19:09:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.736 19:09:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:45.736 19:09:23 -- common/autotest_common.sh@875 -- # return 0 00:08:45.736 19:09:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.736 19:09:23 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:45.736 19:09:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:45.994 /dev/nbd12 00:08:45.994 19:09:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:45.994 19:09:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:45.994 19:09:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:08:45.994 19:09:23 -- common/autotest_common.sh@855 -- # local i 00:08:45.994 19:09:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:45.994 19:09:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:45.994 19:09:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:08:45.994 19:09:23 -- common/autotest_common.sh@859 -- # break 00:08:45.994 19:09:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:45.994 19:09:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:45.994 19:09:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.994 1+0 records in 00:08:45.994 1+0 records out 00:08:45.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069578 s, 5.9 MB/s 00:08:45.994 19:09:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.994 19:09:23 -- common/autotest_common.sh@872 -- # size=4096 00:08:45.994 19:09:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.994 19:09:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:45.994 19:09:23 -- common/autotest_common.sh@875 -- # return 0 00:08:45.994 19:09:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.994 19:09:23 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:45.994 19:09:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:46.253 /dev/nbd13 00:08:46.253 19:09:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:46.253 19:09:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:46.253 19:09:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:08:46.253 19:09:23 -- common/autotest_common.sh@855 -- # local i 00:08:46.253 19:09:23 -- 
common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:46.253 19:09:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:46.253 19:09:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:08:46.253 19:09:23 -- common/autotest_common.sh@859 -- # break 00:08:46.253 19:09:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:46.253 19:09:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:46.253 19:09:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.253 1+0 records in 00:08:46.253 1+0 records out 00:08:46.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834021 s, 4.9 MB/s 00:08:46.253 19:09:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.253 19:09:23 -- common/autotest_common.sh@872 -- # size=4096 00:08:46.253 19:09:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.253 19:09:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:46.253 19:09:23 -- common/autotest_common.sh@875 -- # return 0 00:08:46.253 19:09:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.253 19:09:23 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:46.253 19:09:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.253 19:09:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.253 19:09:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd0", 00:08:46.512 "bdev_name": "Nvme0n1" 00:08:46.512 }, 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd1", 00:08:46.512 "bdev_name": "Nvme1n1" 00:08:46.512 }, 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd10", 00:08:46.512 "bdev_name": "Nvme2n1" 00:08:46.512 }, 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd11", 00:08:46.512 "bdev_name": "Nvme2n2" 00:08:46.512 }, 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd12", 00:08:46.512 "bdev_name": "Nvme2n3" 00:08:46.512 }, 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd13", 00:08:46.512 "bdev_name": "Nvme3n1" 00:08:46.512 } 00:08:46.512 ]' 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd0", 00:08:46.512 "bdev_name": "Nvme0n1" 00:08:46.512 }, 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd1", 00:08:46.512 "bdev_name": "Nvme1n1" 00:08:46.512 }, 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd10", 00:08:46.512 "bdev_name": "Nvme2n1" 00:08:46.512 }, 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd11", 00:08:46.512 "bdev_name": "Nvme2n2" 00:08:46.512 }, 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd12", 00:08:46.512 "bdev_name": "Nvme2n3" 00:08:46.512 }, 00:08:46.512 { 00:08:46.512 "nbd_device": "/dev/nbd13", 00:08:46.512 "bdev_name": "Nvme3n1" 00:08:46.512 } 00:08:46.512 ]' 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:46.512 /dev/nbd1 00:08:46.512 /dev/nbd10 00:08:46.512 /dev/nbd11 00:08:46.512 /dev/nbd12 00:08:46.512 /dev/nbd13' 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:46.512 /dev/nbd1 00:08:46.512 /dev/nbd10 00:08:46.512 /dev/nbd11 00:08:46.512 /dev/nbd12 00:08:46.512 /dev/nbd13' 00:08:46.512 19:09:23 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@65 -- # count=6 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@66 -- # echo 6 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@95 -- # count=6 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:46.512 256+0 records in 00:08:46.512 256+0 records out 00:08:46.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00652878 s, 161 MB/s 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.512 19:09:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:46.771 256+0 records in 00:08:46.771 256+0 records out 00:08:46.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151993 s, 6.9 MB/s 00:08:46.771 19:09:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.771 19:09:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:47.029 256+0 records in 00:08:47.029 256+0 records out 00:08:47.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153758 s, 6.8 MB/s 00:08:47.029 19:09:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.029 19:09:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:47.029 256+0 records in 00:08:47.029 256+0 records out 00:08:47.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146792 s, 7.1 MB/s 00:08:47.029 19:09:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.029 19:09:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:47.287 256+0 records in 00:08:47.287 256+0 records out 00:08:47.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143898 s, 7.3 MB/s 00:08:47.287 19:09:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.287 19:09:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:47.287 256+0 records in 00:08:47.287 256+0 records out 00:08:47.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162103 s, 6.5 MB/s 00:08:47.287 19:09:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.287 19:09:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:47.545 256+0 records in 00:08:47.545 256+0 records out 00:08:47.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161908 s, 6.5 MB/s 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:47.545 19:09:24 
-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:47.545 19:09:24 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:47.546 19:09:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:47.546 19:09:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.546 19:09:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:47.546 19:09:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:47.546 19:09:24 -- bdev/nbd_common.sh@51 -- # local i 00:08:47.546 19:09:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.546 19:09:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:47.804 19:09:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:47.804 19:09:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:47.804 19:09:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:47.804 19:09:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.804 19:09:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.804 19:09:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:47.804 19:09:25 -- bdev/nbd_common.sh@41 -- # break 00:08:47.804 19:09:25 -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.804 19:09:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.804 19:09:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:48.062 19:09:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:48.320 19:09:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:48.320 19:09:25 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:48.320 19:09:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.320 19:09:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.320 19:09:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:48.320 19:09:25 -- bdev/nbd_common.sh@41 -- # break 00:08:48.320 19:09:25 -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.320 19:09:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.320 19:09:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:48.578 19:09:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:48.578 19:09:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:48.578 19:09:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:48.578 19:09:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.578 19:09:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.578 19:09:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:48.578 19:09:25 -- bdev/nbd_common.sh@41 -- # break 00:08:48.578 19:09:25 -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.578 19:09:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.578 19:09:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:48.836 19:09:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:48.836 19:09:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:48.836 19:09:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:48.836 19:09:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.836 19:09:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.836 19:09:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:48.836 19:09:26 -- bdev/nbd_common.sh@41 -- # break 00:08:48.836 19:09:26 -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.836 19:09:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.836 19:09:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:49.095 19:09:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:49.095 19:09:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:49.095 19:09:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:49.095 19:09:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.095 19:09:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.095 19:09:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:49.095 19:09:26 -- bdev/nbd_common.sh@41 -- # break 00:08:49.095 19:09:26 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.095 19:09:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.095 19:09:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@41 -- # break 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.353 19:09:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@65 -- # true 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@65 -- # count=0 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@104 -- # count=0 00:08:49.611 19:09:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:49.612 19:09:26 -- bdev/nbd_common.sh@109 -- # return 0 00:08:49.612 19:09:26 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:49.612 19:09:26 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.612 19:09:26 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:49.612 19:09:26 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:49.612 19:09:26 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:49.612 19:09:26 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:49.870 malloc_lvol_verify 00:08:49.870 19:09:27 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:50.128 6862d56f-d002-4c28-943d-44f551c290f7 00:08:50.128 19:09:27 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:50.387 e99fad60-78cb-4cf8-a01c-0692e28ce4aa 00:08:50.387 19:09:27 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:50.645 /dev/nbd0 00:08:50.645 19:09:28 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:50.645 mke2fs 1.46.5 (30-Dec-2021) 00:08:50.645 Discarding device blocks: 0/4096 done 00:08:50.645 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:50.645 00:08:50.645 Allocating group tables: 0/1 done 00:08:50.645 Writing inode tables: 0/1 done 00:08:50.645 Creating journal (1024 blocks): done 00:08:50.645 Writing superblocks and filesystem accounting information: 0/1 done 00:08:50.645 00:08:50.645 19:09:28 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:50.645 19:09:28 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:50.645 19:09:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.645 19:09:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:50.645 19:09:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:50.645 19:09:28 -- bdev/nbd_common.sh@51 -- # local i 00:08:50.645 19:09:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.645 19:09:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 
00:08:50.903 19:09:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.903 19:09:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.903 19:09:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.903 19:09:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.903 19:09:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.903 19:09:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.903 19:09:28 -- bdev/nbd_common.sh@41 -- # break 00:08:50.903 19:09:28 -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.903 19:09:28 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:50.903 19:09:28 -- bdev/nbd_common.sh@147 -- # return 0 00:08:50.903 19:09:28 -- bdev/blockdev.sh@324 -- # killprocess 61561 00:08:50.903 19:09:28 -- common/autotest_common.sh@924 -- # '[' -z 61561 ']' 00:08:50.903 19:09:28 -- common/autotest_common.sh@928 -- # kill -0 61561 00:08:50.903 19:09:28 -- common/autotest_common.sh@929 -- # uname 00:08:50.903 19:09:28 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:50.903 19:09:28 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 61561 00:08:51.161 19:09:28 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:51.161 19:09:28 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:51.161 killing process with pid 61561 00:08:51.161 19:09:28 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 61561' 00:08:51.161 19:09:28 -- common/autotest_common.sh@943 -- # kill 61561 00:08:51.161 [2024-02-14 19:09:28.328007] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:08:51.161 19:09:28 -- common/autotest_common.sh@948 -- # wait 61561 00:08:52.539 19:09:29 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:52.539 00:08:52.539 real 0m12.499s 00:08:52.539 user 0m17.903s 00:08:52.539 sys 0m3.846s 00:08:52.539 19:09:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:52.539 19:09:29 -- common/autotest_common.sh@10 -- # set +x 00:08:52.539 ************************************ 00:08:52.539 END TEST bdev_nbd 00:08:52.539 ************************************ 00:08:52.539 19:09:29 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:52.539 19:09:29 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:08:52.539 skipping fio tests on NVMe due to multi-ns failures. 00:08:52.539 19:09:29 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
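The lvol round-trip that closes the bdev_nbd test above reduces to a short RPC sequence; a condensed sketch follows (socket path, names and sizes are taken from the trace; the readiness polling done by the real nbd_common.sh helpers is noted but not reproduced):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512-byte blocks
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical volume store on top of it
$RPC bdev_lvol_create lvol 4 -l lvs                    # lvol named 'lvol', size 4, in store 'lvs' (arguments as traced)
$RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as a kernel block device
# (the real helper first polls /proc/partitions until nbd0 shows up)
mkfs.ext4 /dev/nbd0                                    # a clean mkfs is the pass criterion for nbd_with_lvol_verify
$RPC nbd_stop_disk /dev/nbd0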
00:08:52.539 19:09:29 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:52.539 19:09:29 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:52.539 19:09:29 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:08:52.539 19:09:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:52.539 19:09:29 -- common/autotest_common.sh@10 -- # set +x 00:08:52.539 ************************************ 00:08:52.539 START TEST bdev_verify 00:08:52.539 ************************************ 00:08:52.539 19:09:29 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:52.539 [2024-02-14 19:09:29.672724] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:08:52.539 [2024-02-14 19:09:29.672881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61962 ] 00:08:52.540 [2024-02-14 19:09:29.849313] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:52.798 [2024-02-14 19:09:30.084519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.799 [2024-02-14 19:09:30.084545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.799 [2024-02-14 19:09:30.084937] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:08:53.366 Running I/O for 5 seconds... 
00:08:58.639 00:08:58.639 Latency(us) 00:08:58.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.639 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0x0 length 0xbd0bd 00:08:58.639 Nvme0n1 : 5.04 2945.94 11.51 0.00 0.00 43319.31 7030.23 48854.11 00:08:58.639 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:58.639 Nvme0n1 : 5.05 2923.16 11.42 0.00 0.00 43543.33 6494.02 45517.73 00:08:58.639 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0x0 length 0xa0000 00:08:58.639 Nvme1n1 : 5.04 2949.89 11.52 0.00 0.00 43245.85 4230.05 45517.73 00:08:58.639 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0xa0000 length 0xa0000 00:08:58.639 Nvme1n1 : 5.05 2921.26 11.41 0.00 0.00 43523.93 8877.15 45041.11 00:08:58.639 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0x0 length 0x80000 00:08:58.639 Nvme2n1 : 5.04 2948.92 11.52 0.00 0.00 43184.29 4915.20 38844.97 00:08:58.639 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0x80000 length 0x80000 00:08:58.639 Nvme2n1 : 5.05 2920.41 11.41 0.00 0.00 43495.18 9532.51 45041.11 00:08:58.639 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0x0 length 0x80000 00:08:58.639 Nvme2n2 : 5.05 2948.09 11.52 0.00 0.00 43157.66 5570.56 37415.10 00:08:58.639 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0x80000 length 0x80000 00:08:58.639 Nvme2n2 : 5.06 2918.69 11.40 0.00 0.00 43471.73 11617.75 44802.79 00:08:58.639 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0x0 length 0x80000 00:08:58.639 Nvme2n3 : 5.05 2946.09 11.51 0.00 0.00 43136.07 7923.90 36223.53 00:08:58.639 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0x80000 length 0x80000 00:08:58.639 Nvme2n3 : 5.04 2919.65 11.40 0.00 0.00 43700.77 6166.34 49807.36 00:08:58.639 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0x0 length 0x20000 00:08:58.639 Nvme3n1 : 5.05 2952.03 11.53 0.00 0.00 43046.11 938.36 35508.60 00:08:58.639 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:58.639 Verification LBA range: start 0x20000 length 0x20000 00:08:58.639 Nvme3n1 : 5.05 2925.20 11.43 0.00 0.00 43614.24 4051.32 46709.29 00:08:58.639 =================================================================================================================== 00:08:58.639 Total : 35219.33 137.58 0.00 0.00 43368.98 938.36 49807.36 00:08:58.639 [2024-02-14 19:09:35.857318] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:08.616 00:09:08.616 real 0m15.987s 00:09:08.616 user 0m30.484s 00:09:08.616 sys 0m0.371s 00:09:08.616 19:09:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:08.616 19:09:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.616 
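A quick sanity check on the table above: the MiB/s column is just IOPS multiplied by the 4096-byte I/O size, so for the totals row:

echo '35219.33 * 4096 / (1024 * 1024)' | bc -l   # ~= 137.58, matching the reported 137.58 MiB/s at 35219.33 IOPS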
************************************ 00:09:08.616 END TEST bdev_verify 00:09:08.616 ************************************ 00:09:08.616 19:09:45 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:08.616 19:09:45 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:09:08.616 19:09:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:08.616 19:09:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.616 ************************************ 00:09:08.616 START TEST bdev_verify_big_io 00:09:08.616 ************************************ 00:09:08.616 19:09:45 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:08.616 [2024-02-14 19:09:45.702950] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:09:08.616 [2024-02-14 19:09:45.703090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62151 ] 00:09:08.616 [2024-02-14 19:09:45.861861] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.874 [2024-02-14 19:09:46.045178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.875 [2024-02-14 19:09:46.045193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.875 [2024-02-14 19:09:46.045569] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:09.441 Running I/O for 5 seconds... 
00:09:16.006 00:09:16.006 Latency(us) 00:09:16.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.006 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0x0 length 0xbd0b 00:09:16.006 Nvme0n1 : 5.37 233.44 14.59 0.00 0.00 531241.10 95325.09 754974.72 00:09:16.006 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:16.006 Nvme0n1 : 5.34 285.33 17.83 0.00 0.00 440894.57 48854.11 602454.57 00:09:16.006 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0x0 length 0xa000 00:09:16.006 Nvme1n1 : 5.39 240.52 15.03 0.00 0.00 513055.15 21686.46 686340.65 00:09:16.006 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0xa000 length 0xa000 00:09:16.006 Nvme1n1 : 5.34 285.21 17.83 0.00 0.00 435694.23 49569.05 552885.53 00:09:16.006 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0x0 length 0x8000 00:09:16.006 Nvme2n1 : 5.41 249.47 15.59 0.00 0.00 489927.44 12094.37 617706.59 00:09:16.006 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0x8000 length 0x8000 00:09:16.006 Nvme2n1 : 5.35 285.10 17.82 0.00 0.00 430234.30 50045.67 507129.48 00:09:16.006 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0x0 length 0x8000 00:09:16.006 Nvme2n2 : 5.41 249.38 15.59 0.00 0.00 481204.65 12571.00 545259.52 00:09:16.006 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0x8000 length 0x8000 00:09:16.006 Nvme2n2 : 5.38 291.51 18.22 0.00 0.00 416700.34 25380.31 451840.93 00:09:16.006 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0x0 length 0x8000 00:09:16.006 Nvme2n3 : 5.42 256.46 16.03 0.00 0.00 460129.28 11856.06 467092.95 00:09:16.006 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0x8000 length 0x8000 00:09:16.006 Nvme2n3 : 5.38 299.08 18.69 0.00 0.00 402545.74 4379.00 402271.88 00:09:16.006 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0x0 length 0x2000 00:09:16.006 Nvme3n1 : 5.47 294.52 18.41 0.00 0.00 395458.59 491.52 467092.95 00:09:16.006 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.006 Verification LBA range: start 0x2000 length 0x2000 00:09:16.006 Nvme3n1 : 5.39 307.58 19.22 0.00 0.00 386775.36 7179.17 404178.39 00:09:16.006 =================================================================================================================== 00:09:16.006 Total : 3277.60 204.85 0.00 0.00 444706.90 491.52 754974.72 00:09:16.006 [2024-02-14 19:09:52.483003] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:16.574 00:09:16.574 real 0m8.134s 00:09:16.574 user 0m15.011s 00:09:16.574 sys 0m0.265s 00:09:16.574 19:09:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:16.574 ************************************ 00:09:16.574 
END TEST bdev_verify_big_io 00:09:16.574 ************************************ 00:09:16.574 19:09:53 -- common/autotest_common.sh@10 -- # set +x 00:09:16.574 19:09:53 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:16.574 19:09:53 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:09:16.574 19:09:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:16.574 19:09:53 -- common/autotest_common.sh@10 -- # set +x 00:09:16.574 ************************************ 00:09:16.574 START TEST bdev_write_zeroes 00:09:16.574 ************************************ 00:09:16.574 19:09:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:16.574 [2024-02-14 19:09:53.888967] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:09:16.574 [2024-02-14 19:09:53.889181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62262 ] 00:09:16.833 [2024-02-14 19:09:54.047655] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.833 [2024-02-14 19:09:54.219867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.833 [2024-02-14 19:09:54.219993] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:17.768 Running I/O for 1 seconds... 
00:09:18.702 00:09:18.702 Latency(us) 00:09:18.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.702 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.702 Nvme0n1 : 1.02 8686.61 33.93 0.00 0.00 14684.24 11260.28 28001.75 00:09:18.702 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.702 Nvme1n1 : 1.02 8673.07 33.88 0.00 0.00 14685.64 11915.64 27882.59 00:09:18.702 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.702 Nvme2n1 : 1.02 8659.84 33.83 0.00 0.00 14629.19 11558.17 25141.99 00:09:18.702 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.702 Nvme2n2 : 1.02 8697.48 33.97 0.00 0.00 14558.59 8996.31 23473.80 00:09:18.702 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.702 Nvme2n3 : 1.02 8684.24 33.92 0.00 0.00 14529.95 9353.77 20971.52 00:09:18.702 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:18.702 Nvme3n1 : 1.03 8671.33 33.87 0.00 0.00 14508.62 9711.24 20018.27 00:09:18.702 =================================================================================================================== 00:09:18.702 Total : 52072.57 203.41 0.00 0.00 14599.13 8996.31 28001.75 00:09:18.703 [2024-02-14 19:09:55.874310] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:19.640 ************************************ 00:09:19.640 END TEST bdev_write_zeroes 00:09:19.640 ************************************ 00:09:19.640 00:09:19.640 real 0m3.235s 00:09:19.640 user 0m2.912s 00:09:19.640 sys 0m0.200s 00:09:19.640 19:09:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.640 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:09:19.899 19:09:57 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:19.899 19:09:57 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:09:19.899 19:09:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:19.899 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:09:19.899 ************************************ 00:09:19.899 START TEST bdev_json_nonenclosed 00:09:19.899 ************************************ 00:09:19.899 19:09:57 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:19.899 [2024-02-14 19:09:57.191085] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:09:19.899 [2024-02-14 19:09:57.191270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62315 ] 00:09:20.158 [2024-02-14 19:09:57.362185] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.158 [2024-02-14 19:09:57.541302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.158 [2024-02-14 19:09:57.541423] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:20.158 [2024-02-14 19:09:57.541573] json_config.c: 598:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:20.158 [2024-02-14 19:09:57.541602] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:20.158 [2024-02-14 19:09:57.541617] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:20.158 [2024-02-14 19:09:57.541656] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:20.726 00:09:20.726 real 0m0.847s 00:09:20.726 user 0m0.614s 00:09:20.726 sys 0m0.126s 00:09:20.726 19:09:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:20.726 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:09:20.726 ************************************ 00:09:20.726 END TEST bdev_json_nonenclosed 00:09:20.726 ************************************ 00:09:20.726 19:09:57 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:20.726 19:09:57 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:09:20.726 19:09:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:20.726 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:09:20.726 ************************************ 00:09:20.726 START TEST bdev_json_nonarray 00:09:20.726 ************************************ 00:09:20.726 19:09:57 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:20.726 [2024-02-14 19:09:58.071003] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:09:20.726 [2024-02-14 19:09:58.071159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62346 ] 00:09:20.985 [2024-02-14 19:09:58.228330] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.244 [2024-02-14 19:09:58.465864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.244 [2024-02-14 19:09:58.465994] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:21.244 [2024-02-14 19:09:58.466127] json_config.c: 604:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
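Both JSON negative tests in this stretch feed bdevperf a config file that breaks the same contract: the file must be one JSON object whose "subsystems" key is an array of subsystem entries. For comparison, a minimal well-formed config might look like the sketch below (the malloc bdev and the /tmp path are illustrative, not taken from this run):

cat > /tmp/bdev_ok.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
# nonenclosed.json drops the outer braces and nonarray.json turns "subsystems" into a
# plain object, which is exactly what the *ERROR* lines from json_config_prepare_ctx report.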
00:09:21.244 [2024-02-14 19:09:58.466154] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:21.244 [2024-02-14 19:09:58.466168] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:21.244 [2024-02-14 19:09:58.466208] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:21.502 00:09:21.502 real 0m0.852s 00:09:21.502 user 0m0.626s 00:09:21.502 sys 0m0.120s 00:09:21.502 19:09:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.502 19:09:58 -- common/autotest_common.sh@10 -- # set +x 00:09:21.502 ************************************ 00:09:21.502 END TEST bdev_json_nonarray 00:09:21.502 ************************************ 00:09:21.502 19:09:58 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:09:21.502 19:09:58 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:09:21.502 19:09:58 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:09:21.502 19:09:58 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:21.502 19:09:58 -- bdev/blockdev.sh@809 -- # cleanup 00:09:21.502 19:09:58 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:21.502 19:09:58 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:21.502 19:09:58 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:09:21.502 19:09:58 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:09:21.502 19:09:58 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:09:21.502 19:09:58 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:09:21.502 00:09:21.502 real 0m52.203s 00:09:21.502 user 1m23.504s 00:09:21.502 sys 0m6.403s 00:09:21.502 19:09:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.502 ************************************ 00:09:21.502 END TEST blockdev_nvme 00:09:21.502 19:09:58 -- common/autotest_common.sh@10 -- # set +x 00:09:21.502 ************************************ 00:09:21.761 19:09:58 -- spdk/autotest.sh@219 -- # uname -s 00:09:21.761 19:09:58 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:09:21.761 19:09:58 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:21.761 19:09:58 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:09:21.761 19:09:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:21.761 19:09:58 -- common/autotest_common.sh@10 -- # set +x 00:09:21.761 ************************************ 00:09:21.761 START TEST blockdev_nvme_gpt 00:09:21.761 ************************************ 00:09:21.761 19:09:58 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:21.761 * Looking for test storage... 
00:09:21.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:21.761 19:09:58 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:21.761 19:09:58 -- bdev/nbd_common.sh@6 -- # set -e 00:09:21.761 19:09:58 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:21.761 19:09:58 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:21.761 19:09:58 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:21.761 19:09:59 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:21.761 19:09:59 -- bdev/blockdev.sh@18 -- # : 00:09:21.761 19:09:59 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:21.761 19:09:59 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:21.761 19:09:59 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:21.761 19:09:59 -- bdev/blockdev.sh@672 -- # uname -s 00:09:21.761 19:09:59 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:09:21.761 19:09:59 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:09:21.761 19:09:59 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:09:21.761 19:09:59 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:21.761 19:09:59 -- bdev/blockdev.sh@682 -- # dek= 00:09:21.761 19:09:59 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:21.761 19:09:59 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:21.761 19:09:59 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:21.761 19:09:59 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:09:21.761 19:09:59 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:09:21.761 19:09:59 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:21.761 19:09:59 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62421 00:09:21.761 19:09:59 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:21.761 19:09:59 -- bdev/blockdev.sh@47 -- # waitforlisten 62421 00:09:21.761 19:09:59 -- common/autotest_common.sh@817 -- # '[' -z 62421 ']' 00:09:21.761 19:09:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.761 19:09:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:21.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.761 19:09:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.761 19:09:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:21.761 19:09:59 -- common/autotest_common.sh@10 -- # set +x 00:09:21.761 19:09:59 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:21.761 [2024-02-14 19:09:59.122847] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:09:21.761 [2024-02-14 19:09:59.123044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62421 ] 00:09:22.019 [2024-02-14 19:09:59.292827] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.277 [2024-02-14 19:09:59.473247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:22.277 [2024-02-14 19:09:59.473534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.652 19:10:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:23.652 19:10:00 -- common/autotest_common.sh@850 -- # return 0 00:09:23.652 19:10:00 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:09:23.652 19:10:00 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:09:23.652 19:10:00 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:23.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:23.920 Waiting for block devices as requested 00:09:24.178 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.178 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.178 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.437 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.708 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:29.708 19:10:06 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:09:29.708 19:10:06 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:09:29.708 19:10:06 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:09:29.708 19:10:06 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:09:29.708 19:10:06 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:09:29.708 19:10:06 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0c0n1 00:09:29.708 19:10:06 -- common/autotest_common.sh@1645 -- # local device=nvme0c0n1 00:09:29.708 19:10:06 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:09:29.708 19:10:06 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:09:29.708 19:10:06 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:09:29.708 19:10:06 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:09:29.708 19:10:06 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:09:29.708 19:10:06 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:29.708 19:10:06 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:09:29.708 19:10:06 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:09:29.708 19:10:06 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:09:29.708 19:10:06 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:09:29.708 19:10:06 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:29.708 19:10:06 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:09:29.709 19:10:06 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:09:29.709 19:10:06 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:09:29.709 19:10:06 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:09:29.709 19:10:06 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:29.709 19:10:06 -- 
common/autotest_common.sh@1648 -- # [[ none != none ]] 00:09:29.709 19:10:06 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:09:29.709 19:10:06 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n3 00:09:29.709 19:10:06 -- common/autotest_common.sh@1645 -- # local device=nvme1n3 00:09:29.709 19:10:06 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:29.709 19:10:06 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:09:29.709 19:10:06 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:09:29.709 19:10:06 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme2n1 00:09:29.709 19:10:06 -- common/autotest_common.sh@1645 -- # local device=nvme2n1 00:09:29.709 19:10:06 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:29.709 19:10:06 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:09:29.709 19:10:06 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:09:29.709 19:10:06 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme3n1 00:09:29.709 19:10:06 -- common/autotest_common.sh@1645 -- # local device=nvme3n1 00:09:29.709 19:10:06 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:29.709 19:10:06 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:09:29.709 19:10:06 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:09:29.709 19:10:06 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:09:29.709 19:10:06 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:09:29.709 19:10:06 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:29.709 19:10:06 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:09:29.709 19:10:06 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:09:29.709 19:10:06 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:09:29.709 19:10:06 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:09:29.709 BYT; 00:09:29.709 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:29.709 19:10:06 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:09:29.709 BYT; 00:09:29.709 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:29.709 19:10:06 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:09:29.709 19:10:06 -- bdev/blockdev.sh@114 -- # break 00:09:29.709 19:10:06 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:09:29.709 19:10:06 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:29.709 19:10:06 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:29.709 19:10:06 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:29.709 19:10:06 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:09:29.709 19:10:06 -- scripts/common.sh@410 -- # local spdk_guid 00:09:29.709 19:10:06 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:29.709 19:10:06 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:29.709 19:10:06 -- scripts/common.sh@415 -- # IFS='()' 00:09:29.709 19:10:06 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:09:29.709 19:10:06 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:29.709 19:10:06 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:29.709 19:10:06 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:29.709 19:10:06 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:29.709 19:10:06 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:29.709 19:10:06 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:09:29.709 19:10:06 -- scripts/common.sh@422 -- # local spdk_guid 00:09:29.709 19:10:06 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:29.709 19:10:06 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:29.709 19:10:06 -- scripts/common.sh@427 -- # IFS='()' 00:09:29.709 19:10:06 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:09:29.709 19:10:06 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:29.709 19:10:06 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:29.709 19:10:06 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:29.709 19:10:06 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:29.709 19:10:06 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:29.709 19:10:06 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:09:30.646 The operation has completed successfully. 00:09:30.646 19:10:07 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:09:31.582 The operation has completed successfully. 
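For context, the xtrace above is blockdev.sh preparing a GPT test disk: it picks the namespace that parted reports as having an unrecognised disk label, scrapes the SPDK partition-type GUIDs out of module/bdev/gpt/gpt.h, creates two half-disk partitions, and stamps their type and unique GUIDs with sgdisk. A condensed sketch of that sequence, using the device and repo paths from this run (the get_spdk_gpt helpers in scripts/common.sh remain the authoritative implementation; ordering and error handling here are simplified):

#!/usr/bin/env bash
# Condensed sketch of the GPT setup traced above; the device-selection loop
# from blockdev.sh is omitted.
set -euo pipefail

dev=/dev/nvme2n1
gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h

# Pull a partition-type GUID out of gpt.h: take the text between the
# parentheses of the SPDK_GPT_GUID(...) macro and normalize it to the
# canonical xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx form.
guid_from_header() {
    local macro=$1 raw
    IFS='()' read -r _ raw _ < <(grep -w "$macro" "$gpt_h")
    raw=${raw//, /-}      # "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
    echo "${raw//0x/}"    # strip the 0x prefixes
}

new_guid=$(guid_from_header SPDK_GPT_PART_TYPE_GUID)      # 6527994e-2c5a-...
old_guid=$(guid_from_header SPDK_GPT_PART_TYPE_GUID_OLD)  # 7c5222bd-8f5d-...

# Two half-disk partitions, then stamp the SPDK type GUIDs plus the fixed
# unique GUIDs that the later bdev_get_bdevs output is checked against.
parted -s "$dev" mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%
sgdisk -t "1:$new_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t "2:$old_guid" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"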
00:09:31.582 19:10:08 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:32.517 lsblk: /dev/nvme0c0n1: not a block device 00:09:32.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.776 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.776 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.776 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.776 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.035 19:10:10 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:09:33.035 19:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.035 19:10:10 -- common/autotest_common.sh@10 -- # set +x 00:09:33.035 [] 00:09:33.035 19:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.035 19:10:10 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:09:33.035 19:10:10 -- bdev/blockdev.sh@79 -- # local json 00:09:33.035 19:10:10 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:33.035 19:10:10 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.035 19:10:10 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:33.035 19:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.035 19:10:10 -- common/autotest_common.sh@10 -- # set +x 00:09:33.294 19:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.294 19:10:10 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:33.294 19:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.294 19:10:10 -- common/autotest_common.sh@10 -- # set +x 00:09:33.294 19:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.294 19:10:10 -- bdev/blockdev.sh@738 -- # cat 00:09:33.294 19:10:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:33.294 19:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.294 19:10:10 -- common/autotest_common.sh@10 -- # set +x 00:09:33.294 19:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.294 19:10:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:33.294 19:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.294 19:10:10 -- common/autotest_common.sh@10 -- # set +x 00:09:33.294 19:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.294 19:10:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:33.294 19:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.294 19:10:10 -- common/autotest_common.sh@10 -- # set +x 00:09:33.294 19:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.294 19:10:10 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:33.294 19:10:10 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:33.294 19:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.294 19:10:10 -- common/autotest_common.sh@10 -- # set +x 00:09:33.294 19:10:10 -- bdev/blockdev.sh@746 -- # jq -r '.[] | 
select(.claimed == false)' 00:09:33.555 19:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.555 19:10:10 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:33.555 19:10:10 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:33.555 19:10:10 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f55b595a-4c8f-4745-91c2-2b7c02c29fa5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f55b595a-4c8f-4745-91c2-2b7c02c29fa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' 
],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0157e433-7ec2-48b8-8843-05401860d50a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0157e433-7ec2-48b8-8843-05401860d50a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7639fb90-6787-4200-8e7a-e389b98bb381"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7639fb90-6787-4200-8e7a-e389b98bb381",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a6ec5e3e-47c9-4116-b465-842af8ed91d7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a6ec5e3e-47c9-4116-b465-842af8ed91d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' 
"serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "aa422ec3-1e4e-4e97-8e8b-f6b92a08363a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "aa422ec3-1e4e-4e97-8e8b-f6b92a08363a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:33.555 19:10:10 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:33.555 19:10:10 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:09:33.555 19:10:10 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:33.555 19:10:10 -- bdev/blockdev.sh@752 -- # killprocess 62421 00:09:33.555 19:10:10 -- common/autotest_common.sh@924 -- # '[' -z 62421 ']' 00:09:33.555 19:10:10 -- common/autotest_common.sh@928 -- # kill -0 62421 00:09:33.555 19:10:10 -- common/autotest_common.sh@929 -- # uname 00:09:33.555 19:10:10 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:09:33.555 19:10:10 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 62421 00:09:33.555 19:10:10 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:09:33.555 19:10:10 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:09:33.555 killing process with pid 62421 00:09:33.555 19:10:10 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 62421' 00:09:33.555 19:10:10 -- common/autotest_common.sh@943 -- # kill 62421 00:09:33.555 19:10:10 -- common/autotest_common.sh@948 -- # wait 62421 00:09:36.100 19:10:12 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:36.101 19:10:12 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:36.101 19:10:12 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:09:36.101 19:10:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:36.101 19:10:12 -- common/autotest_common.sh@10 -- # set +x 00:09:36.101 ************************************ 00:09:36.101 START TEST bdev_hello_world 00:09:36.101 ************************************ 00:09:36.101 19:10:12 -- common/autotest_common.sh@1102 
-- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:36.101 [2024-02-14 19:10:12.984690] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:09:36.101 [2024-02-14 19:10:12.984846] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63117 ] 00:09:36.101 [2024-02-14 19:10:13.154423] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.101 [2024-02-14 19:10:13.343432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.101 [2024-02-14 19:10:13.343543] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:36.671 [2024-02-14 19:10:13.941352] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:36.671 [2024-02-14 19:10:13.941423] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:36.671 [2024-02-14 19:10:13.941452] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:36.671 [2024-02-14 19:10:13.944447] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:36.671 [2024-02-14 19:10:13.945005] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:36.671 [2024-02-14 19:10:13.945045] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:36.671 [2024-02-14 19:10:13.945273] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:36.671 00:09:36.671 [2024-02-14 19:10:13.945322] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:36.671 [2024-02-14 19:10:13.945388] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:38.048 00:09:38.048 real 0m2.166s 00:09:38.048 user 0m1.841s 00:09:38.048 sys 0m0.214s 00:09:38.048 19:10:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:38.048 ************************************ 00:09:38.048 END TEST bdev_hello_world 00:09:38.048 ************************************ 00:09:38.048 19:10:15 -- common/autotest_common.sh@10 -- # set +x 00:09:38.048 19:10:15 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:38.048 19:10:15 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:09:38.048 19:10:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:38.048 19:10:15 -- common/autotest_common.sh@10 -- # set +x 00:09:38.048 ************************************ 00:09:38.048 START TEST bdev_bounds 00:09:38.048 ************************************ 00:09:38.048 19:10:15 -- common/autotest_common.sh@1102 -- # bdev_bounds '' 00:09:38.048 19:10:15 -- bdev/blockdev.sh@288 -- # bdevio_pid=63161 00:09:38.048 19:10:15 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:38.048 19:10:15 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:38.048 Process bdevio pid: 63161 00:09:38.048 19:10:15 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 63161' 00:09:38.048 19:10:15 -- bdev/blockdev.sh@291 -- # waitforlisten 63161 
00:09:38.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.048 19:10:15 -- common/autotest_common.sh@817 -- # '[' -z 63161 ']' 00:09:38.048 19:10:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.048 19:10:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:38.048 19:10:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.048 19:10:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:38.048 19:10:15 -- common/autotest_common.sh@10 -- # set +x 00:09:38.048 [2024-02-14 19:10:15.185545] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:09:38.048 [2024-02-14 19:10:15.186192] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63161 ] 00:09:38.048 [2024-02-14 19:10:15.351361] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:38.307 [2024-02-14 19:10:15.538757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.307 [2024-02-14 19:10:15.538856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.307 [2024-02-14 19:10:15.538864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.307 [2024-02-14 19:10:15.539244] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:39.683 19:10:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:39.683 19:10:16 -- common/autotest_common.sh@850 -- # return 0 00:09:39.683 19:10:16 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:39.683 I/O targets: 00:09:39.683 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:39.683 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:39.683 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:39.683 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:39.683 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:39.683 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:39.683 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:39.683 00:09:39.683 00:09:39.683 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.683 http://cunit.sourceforge.net/ 00:09:39.683 00:09:39.683 00:09:39.683 Suite: bdevio tests on: Nvme3n1 00:09:39.683 Test: blockdev write read block ...passed 00:09:39.683 Test: blockdev write zeroes read block ...passed 00:09:39.683 Test: blockdev write zeroes read no split ...passed 00:09:39.683 Test: blockdev write zeroes read split ...passed 00:09:39.683 Test: blockdev write zeroes read split partial ...passed 00:09:39.942 Test: blockdev reset ...[2024-02-14 19:10:17.101109] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:39.942 passed 00:09:39.942 Test: blockdev write read 8 blocks ...[2024-02-14 19:10:17.104791] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:39.942 passed 00:09:39.942 Test: blockdev write read size > 128k ...passed 00:09:39.942 Test: blockdev write read invalid size ...passed 00:09:39.942 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:39.942 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:39.942 Test: blockdev write read max offset ...passed 00:09:39.942 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:39.942 Test: blockdev writev readv 8 blocks ...passed 00:09:39.942 Test: blockdev writev readv 30 x 1block ...passed 00:09:39.942 Test: blockdev writev readv block ...passed 00:09:39.942 Test: blockdev writev readv size > 128k ...passed 00:09:39.942 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:39.942 Test: blockdev comparev and writev ...[2024-02-14 19:10:17.112227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28e00a000 len:0x1000 00:09:39.942 [2024-02-14 19:10:17.112288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:39.942 passed 00:09:39.942 Test: blockdev nvme passthru rw ...passed 00:09:39.942 Test: blockdev nvme passthru vendor specific ...passed 00:09:39.942 Test: blockdev nvme admin passthru ...[2024-02-14 19:10:17.113068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:39.942 [2024-02-14 19:10:17.113103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:39.942 passed 00:09:39.942 Test: blockdev copy ...passed 00:09:39.942 Suite: bdevio tests on: Nvme2n3 00:09:39.942 Test: blockdev write read block ...passed 00:09:39.942 Test: blockdev write zeroes read block ...passed 00:09:39.942 Test: blockdev write zeroes read no split ...passed 00:09:39.942 Test: blockdev write zeroes read split ...passed 00:09:39.942 Test: blockdev write zeroes read split partial ...passed 00:09:39.942 Test: blockdev reset ...[2024-02-14 19:10:17.191814] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:39.942 passed 00:09:39.942 Test: blockdev write read 8 blocks ...[2024-02-14 19:10:17.196116] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:39.942 passed 00:09:39.942 Test: blockdev write read size > 128k ...passed 00:09:39.942 Test: blockdev write read invalid size ...passed 00:09:39.942 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:39.942 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:39.942 Test: blockdev write read max offset ...passed 00:09:39.942 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:39.942 Test: blockdev writev readv 8 blocks ...passed 00:09:39.942 Test: blockdev writev readv 30 x 1block ...passed 00:09:39.942 Test: blockdev writev readv block ...passed 00:09:39.942 Test: blockdev writev readv size > 128k ...passed 00:09:39.942 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:39.942 Test: blockdev comparev and writev ...[2024-02-14 19:10:17.203798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28a104000 len:0x1000 00:09:39.942 [2024-02-14 19:10:17.203865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:39.942 passed 00:09:39.942 Test: blockdev nvme passthru rw ...passed 00:09:39.942 Test: blockdev nvme passthru vendor specific ...passed 00:09:39.942 Test: blockdev nvme admin passthru ...[2024-02-14 19:10:17.204721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:39.942 [2024-02-14 19:10:17.204754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:39.942 passed 00:09:39.942 Test: blockdev copy ...passed 00:09:39.942 Suite: bdevio tests on: Nvme2n2 00:09:39.942 Test: blockdev write read block ...passed 00:09:39.942 Test: blockdev write zeroes read block ...passed 00:09:39.942 Test: blockdev write zeroes read no split ...passed 00:09:39.942 Test: blockdev write zeroes read split ...passed 00:09:39.942 Test: blockdev write zeroes read split partial ...passed 00:09:39.942 Test: blockdev reset ...[2024-02-14 19:10:17.282709] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:39.942 passed 00:09:39.942 Test: blockdev write read 8 blocks ...[2024-02-14 19:10:17.286951] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:39.942 passed 00:09:39.942 Test: blockdev write read size > 128k ...passed 00:09:39.942 Test: blockdev write read invalid size ...passed 00:09:39.942 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:39.942 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:39.942 Test: blockdev write read max offset ...passed 00:09:39.942 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:39.942 Test: blockdev writev readv 8 blocks ...passed 00:09:39.942 Test: blockdev writev readv 30 x 1block ...passed 00:09:39.942 Test: blockdev writev readv block ...passed 00:09:39.942 Test: blockdev writev readv size > 128k ...passed 00:09:39.942 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:39.942 Test: blockdev comparev and writev ...passed 00:09:39.943 Test: blockdev nvme passthru rw ...[2024-02-14 19:10:17.294887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28a104000 len:0x1000 00:09:39.943 [2024-02-14 19:10:17.294955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:39.943 passed 00:09:39.943 Test: blockdev nvme passthru vendor specific ...passed 00:09:39.943 Test: blockdev nvme admin passthru ...[2024-02-14 19:10:17.295778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:39.943 [2024-02-14 19:10:17.295818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:39.943 passed 00:09:39.943 Test: blockdev copy ...passed 00:09:39.943 Suite: bdevio tests on: Nvme2n1 00:09:39.943 Test: blockdev write read block ...passed 00:09:39.943 Test: blockdev write zeroes read block ...passed 00:09:39.943 Test: blockdev write zeroes read no split ...passed 00:09:39.943 Test: blockdev write zeroes read split ...passed 00:09:40.202 Test: blockdev write zeroes read split partial ...passed 00:09:40.202 Test: blockdev reset ...[2024-02-14 19:10:17.372643] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:40.202 passed 00:09:40.202 Test: blockdev write read 8 blocks ...[2024-02-14 19:10:17.376290] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:40.202 passed 00:09:40.202 Test: blockdev write read size > 128k ...passed 00:09:40.202 Test: blockdev write read invalid size ...passed 00:09:40.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.202 Test: blockdev write read max offset ...passed 00:09:40.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.202 Test: blockdev writev readv 8 blocks ...passed 00:09:40.202 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.202 Test: blockdev writev readv block ...passed 00:09:40.202 Test: blockdev writev readv size > 128k ...passed 00:09:40.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.202 Test: blockdev comparev and writev ...[2024-02-14 19:10:17.384723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29fa3c000 len:0x1000 00:09:40.202 [2024-02-14 19:10:17.384782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:40.202 passed 00:09:40.202 Test: blockdev nvme passthru rw ...passed 00:09:40.202 Test: blockdev nvme passthru vendor specific ...passed 00:09:40.202 Test: blockdev nvme admin passthru ...[2024-02-14 19:10:17.385625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:40.202 [2024-02-14 19:10:17.385678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:40.202 passed 00:09:40.202 Test: blockdev copy ...passed 00:09:40.202 Suite: bdevio tests on: Nvme1n1 00:09:40.202 Test: blockdev write read block ...passed 00:09:40.202 Test: blockdev write zeroes read block ...passed 00:09:40.202 Test: blockdev write zeroes read no split ...passed 00:09:40.202 Test: blockdev write zeroes read split ...passed 00:09:40.202 Test: blockdev write zeroes read split partial ...passed 00:09:40.202 Test: blockdev reset ...[2024-02-14 19:10:17.458459] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:40.202 passed 00:09:40.202 Test: blockdev write read 8 blocks ...[2024-02-14 19:10:17.461943] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:40.202 passed 00:09:40.202 Test: blockdev write read size > 128k ...passed 00:09:40.202 Test: blockdev write read invalid size ...passed 00:09:40.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.202 Test: blockdev write read max offset ...passed 00:09:40.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.202 Test: blockdev writev readv 8 blocks ...passed 00:09:40.202 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.202 Test: blockdev writev readv block ...passed 00:09:40.202 Test: blockdev writev readv size > 128k ...passed 00:09:40.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.202 Test: blockdev comparev and writev ...[2024-02-14 19:10:17.470402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29fa38000 len:0x1000 00:09:40.202 [2024-02-14 19:10:17.470500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:40.202 passed 00:09:40.202 Test: blockdev nvme passthru rw ...passed 00:09:40.202 Test: blockdev nvme passthru vendor specific ...[2024-02-14 19:10:17.471324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:40.202 [2024-02-14 19:10:17.471377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:40.202 passed 00:09:40.202 Test: blockdev nvme admin passthru ...passed 00:09:40.202 Test: blockdev copy ...passed 00:09:40.202 Suite: bdevio tests on: Nvme0n1p2 00:09:40.202 Test: blockdev write read block ...passed 00:09:40.202 Test: blockdev write zeroes read block ...passed 00:09:40.202 Test: blockdev write zeroes read no split ...passed 00:09:40.202 Test: blockdev write zeroes read split ...passed 00:09:40.202 Test: blockdev write zeroes read split partial ...passed 00:09:40.202 Test: blockdev reset ...[2024-02-14 19:10:17.550174] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:40.202 passed 00:09:40.202 Test: blockdev write read 8 blocks ...[2024-02-14 19:10:17.553781] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:40.202 passed 00:09:40.202 Test: blockdev write read size > 128k ...passed 00:09:40.202 Test: blockdev write read invalid size ...passed 00:09:40.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.202 Test: blockdev write read max offset ...passed 00:09:40.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.202 Test: blockdev writev readv 8 blocks ...passed 00:09:40.202 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.202 Test: blockdev writev readv block ...passed 00:09:40.202 Test: blockdev writev readv size > 128k ...passed 00:09:40.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.202 Test: blockdev comparev and writev ...passed 00:09:40.202 Test: blockdev nvme passthru rw ...passed 00:09:40.202 Test: blockdev nvme passthru vendor specific ...passed 00:09:40.202 Test: blockdev nvme admin passthru ...passed 00:09:40.202 Test: blockdev copy ...[2024-02-14 19:10:17.561768] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:40.202 separate metadata which is not supported yet. 00:09:40.202 passed 00:09:40.202 Suite: bdevio tests on: Nvme0n1p1 00:09:40.202 Test: blockdev write read block ...passed 00:09:40.202 Test: blockdev write zeroes read block ...passed 00:09:40.202 Test: blockdev write zeroes read no split ...passed 00:09:40.202 Test: blockdev write zeroes read split ...passed 00:09:40.461 Test: blockdev write zeroes read split partial ...passed 00:09:40.461 Test: blockdev reset ...[2024-02-14 19:10:17.630284] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:40.461 [2024-02-14 19:10:17.633852] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:40.461 passed 00:09:40.461 Test: blockdev write read 8 blocks ...passed 00:09:40.461 Test: blockdev write read size > 128k ...passed 00:09:40.461 Test: blockdev write read invalid size ...passed 00:09:40.461 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.461 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.461 Test: blockdev write read max offset ...passed 00:09:40.461 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.461 Test: blockdev writev readv 8 blocks ...passed 00:09:40.461 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.461 Test: blockdev writev readv block ...passed 00:09:40.461 Test: blockdev writev readv size > 128k ...passed 00:09:40.461 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.461 Test: blockdev comparev and writev ...passed 00:09:40.461 Test: blockdev nvme passthru rw ...passed 00:09:40.461 Test: blockdev nvme passthru vendor specific ...passed 00:09:40.461 Test: blockdev nvme admin passthru ...passed 00:09:40.461 Test: blockdev copy ...[2024-02-14 19:10:17.641930] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:40.461 separate metadata which is not supported yet. 
00:09:40.461 passed 00:09:40.461 00:09:40.461 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.461 suites 7 7 n/a 0 0 00:09:40.461 tests 161 161 161 0 0 00:09:40.461 asserts 1006 1006 1006 0 n/a 00:09:40.461 00:09:40.461 Elapsed time = 1.648 seconds 00:09:40.461 0 00:09:40.461 19:10:17 -- bdev/blockdev.sh@293 -- # killprocess 63161 00:09:40.461 19:10:17 -- common/autotest_common.sh@924 -- # '[' -z 63161 ']' 00:09:40.461 19:10:17 -- common/autotest_common.sh@928 -- # kill -0 63161 00:09:40.461 19:10:17 -- common/autotest_common.sh@929 -- # uname 00:09:40.461 19:10:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:09:40.461 19:10:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 63161 00:09:40.461 19:10:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:09:40.461 killing process with pid 63161 00:09:40.461 19:10:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:09:40.461 19:10:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 63161' 00:09:40.461 19:10:17 -- common/autotest_common.sh@943 -- # kill 63161 00:09:40.461 [2024-02-14 19:10:17.690365] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:40.461 19:10:17 -- common/autotest_common.sh@948 -- # wait 63161 00:09:41.394 19:10:18 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:41.394 00:09:41.394 real 0m3.471s 00:09:41.394 user 0m9.329s 00:09:41.394 sys 0m0.381s 00:09:41.394 ************************************ 00:09:41.394 END TEST bdev_bounds 00:09:41.394 ************************************ 00:09:41.394 19:10:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.394 19:10:18 -- common/autotest_common.sh@10 -- # set +x 00:09:41.394 19:10:18 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:41.394 19:10:18 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:09:41.394 19:10:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:41.394 19:10:18 -- common/autotest_common.sh@10 -- # set +x 00:09:41.394 ************************************ 00:09:41.394 START TEST bdev_nbd 00:09:41.394 ************************************ 00:09:41.394 19:10:18 -- common/autotest_common.sh@1102 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:41.394 19:10:18 -- bdev/blockdev.sh@298 -- # uname -s 00:09:41.394 19:10:18 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:41.394 19:10:18 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.394 19:10:18 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:41.394 19:10:18 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:41.394 19:10:18 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:41.394 19:10:18 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:09:41.394 19:10:18 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:41.394 19:10:18 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
00:09:41.394 19:10:18 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:41.394 19:10:18 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:09:41.394 19:10:18 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:41.394 19:10:18 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:41.394 19:10:18 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:41.394 19:10:18 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:41.394 19:10:18 -- bdev/blockdev.sh@316 -- # nbd_pid=63229 00:09:41.394 19:10:18 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:41.394 19:10:18 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:41.394 19:10:18 -- bdev/blockdev.sh@318 -- # waitforlisten 63229 /var/tmp/spdk-nbd.sock 00:09:41.394 19:10:18 -- common/autotest_common.sh@817 -- # '[' -z 63229 ']' 00:09:41.394 19:10:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:41.394 19:10:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:41.394 19:10:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:41.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:41.394 19:10:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:41.394 19:10:18 -- common/autotest_common.sh@10 -- # set +x 00:09:41.394 [2024-02-14 19:10:18.732662] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:09:41.394 [2024-02-14 19:10:18.732842] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.653 [2024-02-14 19:10:18.905607] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.911 [2024-02-14 19:10:19.071783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.911 [2024-02-14 19:10:19.071889] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:42.479 19:10:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:42.479 19:10:19 -- common/autotest_common.sh@850 -- # return 0 00:09:42.479 19:10:19 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@24 -- # local i 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:42.479 19:10:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:42.737 19:10:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:42.737 19:10:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:42.737 19:10:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:42.737 19:10:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:42.737 19:10:19 -- common/autotest_common.sh@855 -- # local i 00:09:42.737 19:10:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:42.737 19:10:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:42.737 19:10:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:42.737 19:10:19 -- common/autotest_common.sh@859 -- # break 00:09:42.737 19:10:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:42.737 19:10:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:42.737 19:10:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.737 1+0 records in 00:09:42.737 1+0 records out 00:09:42.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767088 s, 5.3 MB/s 00:09:42.737 19:10:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.737 19:10:19 -- common/autotest_common.sh@872 -- # size=4096 00:09:42.737 19:10:19 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.737 19:10:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:42.737 19:10:19 -- common/autotest_common.sh@875 -- # return 0 00:09:42.737 19:10:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:42.737 19:10:19 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:42.737 19:10:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:42.995 19:10:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:42.995 19:10:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:42.995 19:10:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:42.995 19:10:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:42.995 19:10:20 -- common/autotest_common.sh@855 -- # local i 00:09:42.995 19:10:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:42.995 19:10:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:42.995 19:10:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:42.995 19:10:20 -- common/autotest_common.sh@859 -- # break 00:09:42.995 19:10:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:42.995 19:10:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:42.995 19:10:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.995 1+0 records in 00:09:42.995 1+0 records out 00:09:42.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725482 s, 5.6 MB/s 00:09:42.995 19:10:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.995 19:10:20 -- common/autotest_common.sh@872 -- # size=4096 00:09:42.995 19:10:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.995 19:10:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:42.995 19:10:20 -- common/autotest_common.sh@875 -- # return 0 00:09:42.995 19:10:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:42.995 19:10:20 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:42.995 19:10:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:43.254 19:10:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:43.254 19:10:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:43.254 19:10:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:43.254 19:10:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:09:43.254 19:10:20 -- common/autotest_common.sh@855 -- # local i 00:09:43.254 19:10:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:43.254 19:10:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:43.254 19:10:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:09:43.254 19:10:20 -- common/autotest_common.sh@859 -- # break 00:09:43.254 19:10:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:43.254 19:10:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:43.254 19:10:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.254 1+0 records in 00:09:43.254 1+0 records out 00:09:43.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811357 s, 5.0 MB/s 00:09:43.254 19:10:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.254 19:10:20 -- 
common/autotest_common.sh@872 -- # size=4096 00:09:43.254 19:10:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.254 19:10:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:43.254 19:10:20 -- common/autotest_common.sh@875 -- # return 0 00:09:43.254 19:10:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:43.254 19:10:20 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:43.254 19:10:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:43.513 19:10:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:43.513 19:10:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:43.513 19:10:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:43.513 19:10:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:09:43.513 19:10:20 -- common/autotest_common.sh@855 -- # local i 00:09:43.513 19:10:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:43.513 19:10:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:43.513 19:10:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:09:43.513 19:10:20 -- common/autotest_common.sh@859 -- # break 00:09:43.513 19:10:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:43.513 19:10:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:43.513 19:10:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.513 1+0 records in 00:09:43.513 1+0 records out 00:09:43.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669878 s, 6.1 MB/s 00:09:43.513 19:10:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.513 19:10:20 -- common/autotest_common.sh@872 -- # size=4096 00:09:43.513 19:10:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.513 19:10:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:43.513 19:10:20 -- common/autotest_common.sh@875 -- # return 0 00:09:43.513 19:10:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:43.513 19:10:20 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:43.513 19:10:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:43.771 19:10:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:43.771 19:10:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:43.771 19:10:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:43.771 19:10:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:09:43.771 19:10:21 -- common/autotest_common.sh@855 -- # local i 00:09:43.771 19:10:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:43.771 19:10:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:43.771 19:10:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:09:43.771 19:10:21 -- common/autotest_common.sh@859 -- # break 00:09:43.771 19:10:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:43.771 19:10:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:43.771 19:10:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.771 1+0 records in 00:09:43.771 1+0 records out 00:09:43.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634828 s, 6.5 MB/s 00:09:43.771 19:10:21 -- common/autotest_common.sh@872 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.771 19:10:21 -- common/autotest_common.sh@872 -- # size=4096 00:09:43.771 19:10:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.771 19:10:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:43.771 19:10:21 -- common/autotest_common.sh@875 -- # return 0 00:09:43.771 19:10:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:43.771 19:10:21 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:43.771 19:10:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:44.029 19:10:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:44.029 19:10:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:44.029 19:10:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:44.029 19:10:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:09:44.029 19:10:21 -- common/autotest_common.sh@855 -- # local i 00:09:44.029 19:10:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:44.029 19:10:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:44.029 19:10:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:09:44.029 19:10:21 -- common/autotest_common.sh@859 -- # break 00:09:44.029 19:10:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:44.029 19:10:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:44.029 19:10:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.029 1+0 records in 00:09:44.029 1+0 records out 00:09:44.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776597 s, 5.3 MB/s 00:09:44.029 19:10:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.029 19:10:21 -- common/autotest_common.sh@872 -- # size=4096 00:09:44.029 19:10:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.029 19:10:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:44.029 19:10:21 -- common/autotest_common.sh@875 -- # return 0 00:09:44.029 19:10:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:44.029 19:10:21 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:44.029 19:10:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:44.596 19:10:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:44.596 19:10:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:44.596 19:10:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:44.596 19:10:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:09:44.596 19:10:21 -- common/autotest_common.sh@855 -- # local i 00:09:44.596 19:10:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:44.596 19:10:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:44.596 19:10:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:09:44.596 19:10:21 -- common/autotest_common.sh@859 -- # break 00:09:44.596 19:10:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:44.596 19:10:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:44.596 19:10:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.596 1+0 records in 00:09:44.596 1+0 records out 00:09:44.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.00100655 s, 4.1 MB/s 00:09:44.596 19:10:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.596 19:10:21 -- common/autotest_common.sh@872 -- # size=4096 00:09:44.596 19:10:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.596 19:10:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:44.596 19:10:21 -- common/autotest_common.sh@875 -- # return 0 00:09:44.596 19:10:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:44.596 19:10:21 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:44.596 19:10:21 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:44.596 19:10:22 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd0", 00:09:44.596 "bdev_name": "Nvme0n1p1" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd1", 00:09:44.596 "bdev_name": "Nvme0n1p2" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd2", 00:09:44.596 "bdev_name": "Nvme1n1" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd3", 00:09:44.596 "bdev_name": "Nvme2n1" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd4", 00:09:44.596 "bdev_name": "Nvme2n2" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd5", 00:09:44.596 "bdev_name": "Nvme2n3" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd6", 00:09:44.596 "bdev_name": "Nvme3n1" 00:09:44.596 } 00:09:44.596 ]' 00:09:44.596 19:10:22 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:44.596 19:10:22 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd0", 00:09:44.596 "bdev_name": "Nvme0n1p1" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd1", 00:09:44.596 "bdev_name": "Nvme0n1p2" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd2", 00:09:44.596 "bdev_name": "Nvme1n1" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd3", 00:09:44.596 "bdev_name": "Nvme2n1" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd4", 00:09:44.596 "bdev_name": "Nvme2n2" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd5", 00:09:44.596 "bdev_name": "Nvme2n3" 00:09:44.596 }, 00:09:44.596 { 00:09:44.596 "nbd_device": "/dev/nbd6", 00:09:44.596 "bdev_name": "Nvme3n1" 00:09:44.596 } 00:09:44.596 ]' 00:09:44.596 19:10:22 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:44.855 19:10:22 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:44.855 19:10:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.855 19:10:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:44.855 19:10:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:44.855 19:10:22 -- bdev/nbd_common.sh@51 -- # local i 00:09:44.855 19:10:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.855 19:10:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:45.114 19:10:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:45.114 19:10:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:45.114 19:10:22 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:09:45.114 19:10:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.114 19:10:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.114 19:10:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:45.114 19:10:22 -- bdev/nbd_common.sh@41 -- # break 00:09:45.114 19:10:22 -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.114 19:10:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.114 19:10:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:45.372 19:10:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:45.372 19:10:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:45.372 19:10:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:45.372 19:10:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.372 19:10:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.372 19:10:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:45.372 19:10:22 -- bdev/nbd_common.sh@41 -- # break 00:09:45.372 19:10:22 -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.372 19:10:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.372 19:10:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:45.630 19:10:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:45.630 19:10:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:45.630 19:10:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:45.630 19:10:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.630 19:10:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.630 19:10:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:45.630 19:10:22 -- bdev/nbd_common.sh@41 -- # break 00:09:45.630 19:10:22 -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.630 19:10:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.630 19:10:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:45.889 19:10:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:45.889 19:10:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:45.889 19:10:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:45.889 19:10:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.889 19:10:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.889 19:10:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:45.889 19:10:23 -- bdev/nbd_common.sh@41 -- # break 00:09:45.889 19:10:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.889 19:10:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.889 19:10:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:46.146 19:10:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:46.146 19:10:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:46.146 19:10:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:46.146 19:10:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.146 19:10:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.146 19:10:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:46.146 19:10:23 -- bdev/nbd_common.sh@41 -- # break 00:09:46.146 19:10:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.146 19:10:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.146 19:10:23 
-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:46.404 19:10:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:46.404 19:10:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:46.404 19:10:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:46.404 19:10:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.404 19:10:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.404 19:10:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:46.404 19:10:23 -- bdev/nbd_common.sh@41 -- # break 00:09:46.404 19:10:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.404 19:10:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.404 19:10:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@41 -- # break 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.663 19:10:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.663 19:10:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:46.663 19:10:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.663 19:10:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@65 -- # true 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@65 -- # count=0 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@122 -- # count=0 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@127 -- # return 0 00:09:46.921 19:10:24 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:46.921 19:10:24 -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@12 -- # local i 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:46.921 19:10:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:47.180 /dev/nbd0 00:09:47.180 19:10:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:47.180 19:10:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:47.180 19:10:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:47.180 19:10:24 -- common/autotest_common.sh@855 -- # local i 00:09:47.180 19:10:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:47.180 19:10:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:47.180 19:10:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:47.180 19:10:24 -- common/autotest_common.sh@859 -- # break 00:09:47.180 19:10:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:47.180 19:10:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:47.180 19:10:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:47.180 1+0 records in 00:09:47.180 1+0 records out 00:09:47.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707234 s, 5.8 MB/s 00:09:47.180 19:10:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.180 19:10:24 -- common/autotest_common.sh@872 -- # size=4096 00:09:47.180 19:10:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.180 19:10:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:47.180 19:10:24 -- common/autotest_common.sh@875 -- # return 0 00:09:47.180 19:10:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.180 19:10:24 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:47.180 19:10:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:47.439 /dev/nbd1 00:09:47.439 19:10:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:47.439 19:10:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:47.439 19:10:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:47.439 19:10:24 -- common/autotest_common.sh@855 -- # local i 00:09:47.439 19:10:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:47.439 19:10:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:47.439 19:10:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:47.439 19:10:24 -- common/autotest_common.sh@859 -- # break 00:09:47.439 19:10:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:47.439 19:10:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:47.439 19:10:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:47.439 1+0 
records in 00:09:47.439 1+0 records out 00:09:47.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530479 s, 7.7 MB/s 00:09:47.439 19:10:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.439 19:10:24 -- common/autotest_common.sh@872 -- # size=4096 00:09:47.439 19:10:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.439 19:10:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:47.439 19:10:24 -- common/autotest_common.sh@875 -- # return 0 00:09:47.439 19:10:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.439 19:10:24 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:47.439 19:10:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:47.698 /dev/nbd10 00:09:47.698 19:10:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:47.698 19:10:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:47.698 19:10:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:09:47.698 19:10:24 -- common/autotest_common.sh@855 -- # local i 00:09:47.698 19:10:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:47.698 19:10:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:47.698 19:10:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:09:47.698 19:10:24 -- common/autotest_common.sh@859 -- # break 00:09:47.698 19:10:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:47.698 19:10:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:47.698 19:10:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:47.698 1+0 records in 00:09:47.698 1+0 records out 00:09:47.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490769 s, 8.3 MB/s 00:09:47.698 19:10:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.698 19:10:24 -- common/autotest_common.sh@872 -- # size=4096 00:09:47.698 19:10:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.698 19:10:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:47.698 19:10:24 -- common/autotest_common.sh@875 -- # return 0 00:09:47.698 19:10:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.698 19:10:24 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:47.698 19:10:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:47.957 /dev/nbd11 00:09:47.957 19:10:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:47.957 19:10:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:47.957 19:10:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:09:47.957 19:10:25 -- common/autotest_common.sh@855 -- # local i 00:09:47.957 19:10:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:47.957 19:10:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:47.957 19:10:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:09:47.957 19:10:25 -- common/autotest_common.sh@859 -- # break 00:09:47.957 19:10:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:47.957 19:10:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:47.957 19:10:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:09:47.957 1+0 records in 00:09:47.957 1+0 records out 00:09:47.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100897 s, 4.1 MB/s 00:09:47.957 19:10:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.957 19:10:25 -- common/autotest_common.sh@872 -- # size=4096 00:09:47.957 19:10:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.957 19:10:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:47.957 19:10:25 -- common/autotest_common.sh@875 -- # return 0 00:09:47.957 19:10:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.957 19:10:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:47.957 19:10:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:48.216 /dev/nbd12 00:09:48.216 19:10:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:48.216 19:10:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:48.216 19:10:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:09:48.216 19:10:25 -- common/autotest_common.sh@855 -- # local i 00:09:48.216 19:10:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:48.216 19:10:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:48.216 19:10:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:09:48.216 19:10:25 -- common/autotest_common.sh@859 -- # break 00:09:48.216 19:10:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:48.216 19:10:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:48.216 19:10:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.216 1+0 records in 00:09:48.216 1+0 records out 00:09:48.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603846 s, 6.8 MB/s 00:09:48.216 19:10:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.216 19:10:25 -- common/autotest_common.sh@872 -- # size=4096 00:09:48.216 19:10:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.216 19:10:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:48.216 19:10:25 -- common/autotest_common.sh@875 -- # return 0 00:09:48.216 19:10:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.216 19:10:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:48.216 19:10:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:48.474 /dev/nbd13 00:09:48.474 19:10:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:48.474 19:10:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:48.474 19:10:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:09:48.474 19:10:25 -- common/autotest_common.sh@855 -- # local i 00:09:48.474 19:10:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:48.474 19:10:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:48.474 19:10:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:09:48.474 19:10:25 -- common/autotest_common.sh@859 -- # break 00:09:48.474 19:10:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:48.474 19:10:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:48.474 19:10:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:09:48.474 1+0 records in 00:09:48.474 1+0 records out 00:09:48.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907037 s, 4.5 MB/s 00:09:48.474 19:10:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.474 19:10:25 -- common/autotest_common.sh@872 -- # size=4096 00:09:48.474 19:10:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.474 19:10:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:48.474 19:10:25 -- common/autotest_common.sh@875 -- # return 0 00:09:48.474 19:10:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.474 19:10:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:48.474 19:10:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:48.733 /dev/nbd14 00:09:48.733 19:10:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:48.733 19:10:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:48.733 19:10:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:09:48.733 19:10:26 -- common/autotest_common.sh@855 -- # local i 00:09:48.733 19:10:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:48.733 19:10:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:48.733 19:10:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:09:48.733 19:10:26 -- common/autotest_common.sh@859 -- # break 00:09:48.733 19:10:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:48.733 19:10:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:48.733 19:10:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.733 1+0 records in 00:09:48.733 1+0 records out 00:09:48.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594822 s, 6.9 MB/s 00:09:48.733 19:10:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.733 19:10:26 -- common/autotest_common.sh@872 -- # size=4096 00:09:48.733 19:10:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.733 19:10:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:48.733 19:10:26 -- common/autotest_common.sh@875 -- # return 0 00:09:48.733 19:10:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.733 19:10:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:48.733 19:10:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:48.733 19:10:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.733 19:10:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd0", 00:09:48.992 "bdev_name": "Nvme0n1p1" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd1", 00:09:48.992 "bdev_name": "Nvme0n1p2" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd10", 00:09:48.992 "bdev_name": "Nvme1n1" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd11", 00:09:48.992 "bdev_name": "Nvme2n1" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd12", 00:09:48.992 "bdev_name": "Nvme2n2" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd13", 00:09:48.992 "bdev_name": "Nvme2n3" 00:09:48.992 }, 
00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd14", 00:09:48.992 "bdev_name": "Nvme3n1" 00:09:48.992 } 00:09:48.992 ]' 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd0", 00:09:48.992 "bdev_name": "Nvme0n1p1" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd1", 00:09:48.992 "bdev_name": "Nvme0n1p2" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd10", 00:09:48.992 "bdev_name": "Nvme1n1" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd11", 00:09:48.992 "bdev_name": "Nvme2n1" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd12", 00:09:48.992 "bdev_name": "Nvme2n2" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd13", 00:09:48.992 "bdev_name": "Nvme2n3" 00:09:48.992 }, 00:09:48.992 { 00:09:48.992 "nbd_device": "/dev/nbd14", 00:09:48.992 "bdev_name": "Nvme3n1" 00:09:48.992 } 00:09:48.992 ]' 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:48.992 /dev/nbd1 00:09:48.992 /dev/nbd10 00:09:48.992 /dev/nbd11 00:09:48.992 /dev/nbd12 00:09:48.992 /dev/nbd13 00:09:48.992 /dev/nbd14' 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:48.992 /dev/nbd1 00:09:48.992 /dev/nbd10 00:09:48.992 /dev/nbd11 00:09:48.992 /dev/nbd12 00:09:48.992 /dev/nbd13 00:09:48.992 /dev/nbd14' 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@65 -- # count=7 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@66 -- # echo 7 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@95 -- # count=7 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:48.992 19:10:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:48.993 19:10:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:48.993 19:10:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:48.993 19:10:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:48.993 19:10:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:48.993 256+0 records in 00:09:48.993 256+0 records out 00:09:48.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00909336 s, 115 MB/s 00:09:48.993 19:10:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.993 19:10:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:49.251 256+0 records in 00:09:49.251 256+0 records out 00:09:49.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163487 s, 6.4 MB/s 00:09:49.251 19:10:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.251 19:10:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:49.510 256+0 records in 00:09:49.510 256+0 records out 00:09:49.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177402 s, 5.9 MB/s 00:09:49.510 19:10:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.510 19:10:26 -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:49.510 256+0 records in 00:09:49.510 256+0 records out 00:09:49.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171011 s, 6.1 MB/s 00:09:49.510 19:10:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.510 19:10:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:49.769 256+0 records in 00:09:49.769 256+0 records out 00:09:49.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144863 s, 7.2 MB/s 00:09:49.769 19:10:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.769 19:10:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:50.027 256+0 records in 00:09:50.027 256+0 records out 00:09:50.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165314 s, 6.3 MB/s 00:09:50.027 19:10:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:50.027 19:10:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:50.027 256+0 records in 00:09:50.027 256+0 records out 00:09:50.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168808 s, 6.2 MB/s 00:09:50.027 19:10:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:50.027 19:10:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:50.286 256+0 records in 00:09:50.286 256+0 records out 00:09:50.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171987 s, 6.1 MB/s 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:09:50.286 19:10:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@51 -- # local i 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.286 19:10:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:50.545 19:10:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:50.545 19:10:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:50.545 19:10:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:50.545 19:10:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.545 19:10:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.545 19:10:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:50.545 19:10:27 -- bdev/nbd_common.sh@41 -- # break 00:09:50.545 19:10:27 -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.545 19:10:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.545 19:10:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@41 -- # break 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@41 -- # break 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.113 19:10:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:51.700 19:10:28 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:51.700 19:10:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:51.700 19:10:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:51.700 19:10:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.700 19:10:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.700 19:10:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:51.700 19:10:28 -- bdev/nbd_common.sh@41 -- # break 00:09:51.700 19:10:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.700 19:10:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.700 19:10:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:51.700 19:10:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:51.700 19:10:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:51.700 19:10:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:51.700 19:10:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.700 19:10:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.700 19:10:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:51.700 19:10:29 -- bdev/nbd_common.sh@41 -- # break 00:09:51.700 19:10:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.700 19:10:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.700 19:10:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:51.964 19:10:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:51.964 19:10:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:51.964 19:10:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:51.964 19:10:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.964 19:10:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.964 19:10:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:51.964 19:10:29 -- bdev/nbd_common.sh@41 -- # break 00:09:51.964 19:10:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.964 19:10:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.964 19:10:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@41 -- # break 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.223 19:10:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.481 19:10:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:52.481 19:10:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:52.481 19:10:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.740 19:10:29 
-- bdev/nbd_common.sh@65 -- # echo '' 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@65 -- # true 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@65 -- # count=0 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@104 -- # count=0 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@109 -- # return 0 00:09:52.740 19:10:29 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:52.740 19:10:29 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:52.998 malloc_lvol_verify 00:09:52.998 19:10:30 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:53.256 8c3b3ec7-1204-44b0-a75d-3552cf186d10 00:09:53.256 19:10:30 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:53.515 cf991dc2-1e21-43b7-8ae7-1254fe3ab9be 00:09:53.515 19:10:30 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:53.515 /dev/nbd0 00:09:53.773 19:10:30 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:53.773 mke2fs 1.46.5 (30-Dec-2021) 00:09:53.773 Discarding device blocks: 0/4096 done 00:09:53.773 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:53.773 00:09:53.773 Allocating group tables: 0/1 done 00:09:53.773 Writing inode tables: 0/1 done 00:09:53.773 Creating journal (1024 blocks): done 00:09:53.773 Writing superblocks and filesystem accounting information: 0/1 done 00:09:53.773 00:09:53.773 19:10:30 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:53.773 19:10:30 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:53.773 19:10:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.773 19:10:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:53.773 19:10:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:53.773 19:10:30 -- bdev/nbd_common.sh@51 -- # local i 00:09:53.773 19:10:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.773 19:10:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:54.032 19:10:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:54.032 19:10:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:54.032 19:10:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:54.032 19:10:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.032 19:10:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.032 19:10:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:54.032 19:10:31 -- bdev/nbd_common.sh@41 -- # break 00:09:54.032 19:10:31 -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.032 19:10:31 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 
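For readers tracing the nbd_with_lvol_verify step above: it reduces to a handful of rpc.py calls plus mkfs. A condensed sketch, using the same commands, sizes, and socket path that appear in the trace (the rpc and sock variables are just shorthand), and assuming an SPDK application is already listening on that socket with /dev/nbd0 free:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

$rpc -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev with 512-byte blocks
$rpc -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical volume store on top of it
$rpc -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside that store
$rpc -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as a kernel block device
mkfs.ext4 /dev/nbd0                                               # drive real I/O through the NBD export (the 4096 1k-block fs created above)
$rpc -s "$sock" nbd_stop_disk /dev/nbd0                           # tear the export down

The mkfs exiting cleanly is what keeps mkfs_ret at 0, so the '[' 0 -ne 0 ']' check above is false and the verify step returns 0.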
00:09:54.032 19:10:31 -- bdev/nbd_common.sh@147 -- # return 0 00:09:54.032 19:10:31 -- bdev/blockdev.sh@324 -- # killprocess 63229 00:09:54.032 19:10:31 -- common/autotest_common.sh@924 -- # '[' -z 63229 ']' 00:09:54.032 19:10:31 -- common/autotest_common.sh@928 -- # kill -0 63229 00:09:54.032 19:10:31 -- common/autotest_common.sh@929 -- # uname 00:09:54.032 19:10:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:09:54.032 19:10:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 63229 00:09:54.032 19:10:31 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:09:54.032 19:10:31 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:09:54.032 killing process with pid 63229 00:09:54.032 19:10:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 63229' 00:09:54.032 19:10:31 -- common/autotest_common.sh@943 -- # kill 63229 00:09:54.032 [2024-02-14 19:10:31.236594] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:54.032 19:10:31 -- common/autotest_common.sh@948 -- # wait 63229 00:09:54.967 19:10:32 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:09:54.967 00:09:54.967 real 0m13.748s 00:09:54.967 user 0m19.331s 00:09:54.967 sys 0m4.619s 00:09:54.967 19:10:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:54.967 19:10:32 -- common/autotest_common.sh@10 -- # set +x 00:09:54.967 ************************************ 00:09:54.967 END TEST bdev_nbd 00:09:54.967 ************************************ 00:09:55.226 19:10:32 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:09:55.226 19:10:32 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:09:55.226 19:10:32 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:09:55.226 skipping fio tests on NVMe due to multi-ns failures. 00:09:55.226 19:10:32 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:55.226 19:10:32 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:55.226 19:10:32 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:55.226 19:10:32 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:09:55.226 19:10:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:55.226 19:10:32 -- common/autotest_common.sh@10 -- # set +x 00:09:55.226 ************************************ 00:09:55.226 START TEST bdev_verify 00:09:55.226 ************************************ 00:09:55.226 19:10:32 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:55.226 [2024-02-14 19:10:32.528126] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
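The bdev_verify stage starting here is a plain bdevperf run over the bdevs defined in the test's JSON config. Stripped of the run_test wrapper, the invocation from the trace is reproduced below; the flag comments reflect common bdevperf usage rather than anything stated in the log, and -C plus the trailing '' are simply passed through as the harness does:

# -q 128    : keep 128 I/Os outstanding per job
# -o 4096   : 4 KiB I/O size
# -w verify : write data, read it back, and compare
# -t 5      : run for 5 seconds
# -m 0x3    : core mask 0x3, i.e. the two reactors seen in the log
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

The later bdev_verify_big_io stage is the same command with -o 65536.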
00:09:55.226 [2024-02-14 19:10:32.528856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63668 ] 00:09:55.484 [2024-02-14 19:10:32.699937] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.484 [2024-02-14 19:10:32.879431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.484 [2024-02-14 19:10:32.879445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.484 [2024-02-14 19:10:32.879668] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:56.420 Running I/O for 5 seconds... 00:10:01.683 00:10:01.683 Latency(us) 00:10:01.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.683 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x0 length 0x5e800 00:10:01.683 Nvme0n1p1 : 5.05 2396.94 9.36 0.00 0.00 53252.04 7119.59 59339.87 00:10:01.683 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x5e800 length 0x5e800 00:10:01.683 Nvme0n1p1 : 5.05 2396.70 9.36 0.00 0.00 53253.02 7089.80 60293.12 00:10:01.683 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x0 length 0x5e7ff 00:10:01.683 Nvme0n1p2 : 5.05 2395.75 9.36 0.00 0.00 53219.24 8340.95 56718.43 00:10:01.683 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:10:01.683 Nvme0n1p2 : 5.05 2395.56 9.36 0.00 0.00 53219.96 8102.63 58386.62 00:10:01.683 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x0 length 0xa0000 00:10:01.683 Nvme1n1 : 5.05 2394.62 9.35 0.00 0.00 53153.54 9711.24 50760.61 00:10:01.683 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0xa0000 length 0xa0000 00:10:01.683 Nvme1n1 : 5.06 2399.30 9.37 0.00 0.00 53078.39 4438.57 52428.80 00:10:01.683 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x0 length 0x80000 00:10:01.683 Nvme2n1 : 5.06 2399.17 9.37 0.00 0.00 53016.79 3515.11 49569.05 00:10:01.683 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x80000 length 0x80000 00:10:01.683 Nvme2n1 : 5.06 2398.03 9.37 0.00 0.00 53025.22 6225.92 50998.92 00:10:01.683 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x0 length 0x80000 00:10:01.683 Nvme2n2 : 5.06 2397.92 9.37 0.00 0.00 52975.78 5093.93 50045.67 00:10:01.683 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x80000 length 0x80000 00:10:01.683 Nvme2n2 : 5.06 2396.85 9.36 0.00 0.00 52982.76 7626.01 52190.49 00:10:01.683 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x0 length 0x80000 00:10:01.683 Nvme2n3 : 5.06 2396.73 9.36 0.00 0.00 52953.03 6553.60 50283.99 00:10:01.683 Job: 
Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:01.683 Verification LBA range: start 0x80000 length 0x80000 00:10:01.683 Nvme2n3 : 5.06 2395.61 9.36 0.00 0.00 52936.00 9234.62 53858.68 00:10:01.683 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:01.684 Verification LBA range: start 0x0 length 0x20000 00:10:01.684 Nvme3n1 : 5.06 2395.49 9.36 0.00 0.00 52922.30 8162.21 50522.30 00:10:01.684 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:01.684 Verification LBA range: start 0x20000 length 0x20000 00:10:01.684 Nvme3n1 : 5.07 2394.41 9.35 0.00 0.00 52903.33 10664.49 55526.87 00:10:01.684 =================================================================================================================== 00:10:01.684 Total : 33553.09 131.07 0.00 0.00 53063.51 3515.11 60293.12 00:10:01.684 [2024-02-14 19:10:38.654536] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:03.586 00:10:03.586 real 0m8.485s 00:10:03.586 user 0m15.658s 00:10:03.586 sys 0m0.269s 00:10:03.586 19:10:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:03.586 19:10:40 -- common/autotest_common.sh@10 -- # set +x 00:10:03.586 ************************************ 00:10:03.586 END TEST bdev_verify 00:10:03.586 ************************************ 00:10:03.586 19:10:40 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:03.586 19:10:40 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:10:03.586 19:10:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:03.586 19:10:40 -- common/autotest_common.sh@10 -- # set +x 00:10:03.586 ************************************ 00:10:03.586 START TEST bdev_verify_big_io 00:10:03.586 ************************************ 00:10:03.586 19:10:40 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:03.845 [2024-02-14 19:10:41.062462] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:10:03.845 [2024-02-14 19:10:41.062642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63777 ] 00:10:03.845 [2024-02-14 19:10:41.235332] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:04.104 [2024-02-14 19:10:41.412529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.104 [2024-02-14 19:10:41.412529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.104 [2024-02-14 19:10:41.412700] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:05.039 Running I/O for 5 seconds... 
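For quick reads of tables like the one above: the Total row aggregates all jobs, and the columns are runtime in seconds, IOPS, MiB/s, failed I/Os per second, timed-out I/Os per second, and average/min/max latency in microseconds (the Latency(us) heading). A hypothetical one-liner, not part of the test suite, for pulling the aggregate IOPS and throughput out of a saved bdevperf log, assuming the output was captured without the CI timestamp prefixes so each table row sits on its own line:

awk '/^ *Total *:/ { print "IOPS=" $3, "MiB/s=" $4 }' bdevperf.log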
00:10:10.310 00:10:10.310 Latency(us) 00:10:10.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.310 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x0 length 0x5e80 00:10:10.310 Nvme0n1p1 : 5.44 223.23 13.95 0.00 0.00 565243.63 29550.78 751161.72 00:10:10.310 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x5e80 length 0x5e80 00:10:10.310 Nvme0n1p1 : 5.41 231.58 14.47 0.00 0.00 541507.19 56241.80 693966.66 00:10:10.310 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x0 length 0x5e7f 00:10:10.310 Nvme0n1p2 : 5.44 223.12 13.95 0.00 0.00 557768.39 30265.72 701592.67 00:10:10.310 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x5e7f length 0x5e7f 00:10:10.310 Nvme0n1p2 : 5.41 231.49 14.47 0.00 0.00 534785.55 55288.55 648210.62 00:10:10.310 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x0 length 0xa000 00:10:10.310 Nvme1n1 : 5.44 223.03 13.94 0.00 0.00 550131.19 30504.03 648210.62 00:10:10.310 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0xa000 length 0xa000 00:10:10.310 Nvme1n1 : 5.41 231.41 14.46 0.00 0.00 528127.44 55765.18 598641.57 00:10:10.310 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x0 length 0x8000 00:10:10.310 Nvme2n1 : 5.45 222.93 13.93 0.00 0.00 542576.83 31218.97 591015.56 00:10:10.310 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x8000 length 0x8000 00:10:10.310 Nvme2n1 : 5.42 231.33 14.46 0.00 0.00 521367.77 56003.49 552885.53 00:10:10.310 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x0 length 0x8000 00:10:10.310 Nvme2n2 : 5.46 229.36 14.33 0.00 0.00 521427.34 12749.73 606267.58 00:10:10.310 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x8000 length 0x8000 00:10:10.310 Nvme2n2 : 5.44 238.46 14.90 0.00 0.00 502455.38 19779.96 503316.48 00:10:10.310 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x0 length 0x8000 00:10:10.310 Nvme2n3 : 5.46 229.29 14.33 0.00 0.00 514026.48 12809.31 827421.79 00:10:10.310 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x8000 length 0x8000 00:10:10.310 Nvme2n3 : 5.45 247.73 15.48 0.00 0.00 479942.96 3247.01 459466.94 00:10:10.310 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x0 length 0x2000 00:10:10.310 Nvme3n1 : 5.47 246.45 15.40 0.00 0.00 473055.17 3961.95 613893.59 00:10:10.310 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.310 Verification LBA range: start 0x2000 length 0x2000 00:10:10.310 Nvme3n1 : 5.45 255.04 15.94 0.00 0.00 460381.77 3872.58 459466.94 00:10:10.310 =================================================================================================================== 00:10:10.310 Total : 3264.44 204.03 0.00 0.00 519627.83 3247.01 
827421.79 00:10:10.570 [2024-02-14 19:10:47.953241] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:12.470 00:10:12.470 real 0m8.472s 00:10:12.470 user 0m15.622s 00:10:12.470 sys 0m0.293s 00:10:12.470 19:10:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.470 19:10:49 -- common/autotest_common.sh@10 -- # set +x 00:10:12.470 ************************************ 00:10:12.470 END TEST bdev_verify_big_io 00:10:12.470 ************************************ 00:10:12.470 19:10:49 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:12.470 19:10:49 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:10:12.470 19:10:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:12.470 19:10:49 -- common/autotest_common.sh@10 -- # set +x 00:10:12.470 ************************************ 00:10:12.470 START TEST bdev_write_zeroes 00:10:12.470 ************************************ 00:10:12.470 19:10:49 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:12.470 [2024-02-14 19:10:49.566145] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:10:12.470 [2024-02-14 19:10:49.566283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63892 ] 00:10:12.470 [2024-02-14 19:10:49.725149] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.729 [2024-02-14 19:10:49.907353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.729 [2024-02-14 19:10:49.907452] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:13.306 Running I/O for 1 seconds... 
00:10:14.240 00:10:14.240 Latency(us) 00:10:14.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.240 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.240 Nvme0n1p1 : 1.02 6654.90 26.00 0.00 0.00 19138.00 8936.73 52905.43 00:10:14.240 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.240 Nvme0n1p2 : 1.02 6643.38 25.95 0.00 0.00 19134.76 10485.76 52667.11 00:10:14.240 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.240 Nvme1n1 : 1.03 6679.54 26.09 0.00 0.00 18992.72 11498.59 51713.86 00:10:14.240 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.240 Nvme2n1 : 1.03 6669.04 26.05 0.00 0.00 18977.63 12034.79 50522.30 00:10:14.240 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.240 Nvme2n2 : 1.03 6659.00 26.01 0.00 0.00 18927.41 11617.75 51237.24 00:10:14.240 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.240 Nvme2n3 : 1.03 6648.69 25.97 0.00 0.00 18902.97 10128.29 50998.92 00:10:14.241 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.241 Nvme3n1 : 1.03 6638.65 25.93 0.00 0.00 18893.68 9711.24 51237.24 00:10:14.241 =================================================================================================================== 00:10:14.241 Total : 46593.20 182.00 0.00 0.00 18994.93 8936.73 52905.43 00:10:14.241 [2024-02-14 19:10:51.581544] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:15.616 00:10:15.616 real 0m3.310s 00:10:15.616 user 0m2.976s 00:10:15.616 sys 0m0.211s 00:10:15.616 19:10:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:15.616 19:10:52 -- common/autotest_common.sh@10 -- # set +x 00:10:15.616 ************************************ 00:10:15.616 END TEST bdev_write_zeroes 00:10:15.616 ************************************ 00:10:15.616 19:10:52 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:15.616 19:10:52 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:10:15.616 19:10:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:15.616 19:10:52 -- common/autotest_common.sh@10 -- # set +x 00:10:15.616 ************************************ 00:10:15.616 START TEST bdev_json_nonenclosed 00:10:15.616 ************************************ 00:10:15.616 19:10:52 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:15.616 [2024-02-14 19:10:52.943216] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:10:15.616 [2024-02-14 19:10:52.943380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63946 ] 00:10:15.875 [2024-02-14 19:10:53.114666] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.132 [2024-02-14 19:10:53.368005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.132 [2024-02-14 19:10:53.368162] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:16.132 [2024-02-14 19:10:53.368381] json_config.c: 598:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:16.132 [2024-02-14 19:10:53.368453] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:16.132 [2024-02-14 19:10:53.368517] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:16.132 [2024-02-14 19:10:53.368625] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:16.698 00:10:16.698 real 0m0.987s 00:10:16.698 user 0m0.747s 00:10:16.698 sys 0m0.132s 00:10:16.698 19:10:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:16.698 19:10:53 -- common/autotest_common.sh@10 -- # set +x 00:10:16.698 ************************************ 00:10:16.698 END TEST bdev_json_nonenclosed 00:10:16.698 ************************************ 00:10:16.698 19:10:53 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:16.698 19:10:53 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:10:16.698 19:10:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:16.698 19:10:53 -- common/autotest_common.sh@10 -- # set +x 00:10:16.698 ************************************ 00:10:16.698 START TEST bdev_json_nonarray 00:10:16.698 ************************************ 00:10:16.698 19:10:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:16.698 [2024-02-14 19:10:53.974316] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:10:16.698 [2024-02-14 19:10:53.974499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63976 ] 00:10:16.956 [2024-02-14 19:10:54.144064] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.956 [2024-02-14 19:10:54.330119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.956 [2024-02-14 19:10:54.330220] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:16.956 [2024-02-14 19:10:54.330348] json_config.c: 604:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:16.956 [2024-02-14 19:10:54.330376] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:16.956 [2024-02-14 19:10:54.330390] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:16.956 [2024-02-14 19:10:54.330429] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:17.522 00:10:17.522 real 0m0.914s 00:10:17.522 user 0m0.671s 00:10:17.522 sys 0m0.136s 00:10:17.522 19:10:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:17.522 19:10:54 -- common/autotest_common.sh@10 -- # set +x 00:10:17.522 ************************************ 00:10:17.522 END TEST bdev_json_nonarray 00:10:17.522 ************************************ 00:10:17.522 19:10:54 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:10:17.522 19:10:54 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:10:17.522 19:10:54 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:17.522 19:10:54 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:17.522 19:10:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:17.522 19:10:54 -- common/autotest_common.sh@10 -- # set +x 00:10:17.522 ************************************ 00:10:17.522 START TEST bdev_gpt_uuid 00:10:17.522 ************************************ 00:10:17.522 19:10:54 -- common/autotest_common.sh@1102 -- # bdev_gpt_uuid 00:10:17.522 19:10:54 -- bdev/blockdev.sh@612 -- # local bdev 00:10:17.522 19:10:54 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:10:17.522 19:10:54 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=64003 00:10:17.522 19:10:54 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:17.522 19:10:54 -- bdev/blockdev.sh@47 -- # waitforlisten 64003 00:10:17.522 19:10:54 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:17.522 19:10:54 -- common/autotest_common.sh@817 -- # '[' -z 64003 ']' 00:10:17.522 19:10:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.522 19:10:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:17.522 19:10:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.522 19:10:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:17.522 19:10:54 -- common/autotest_common.sh@10 -- # set +x 00:10:17.779 [2024-02-14 19:10:54.954571] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:10:17.780 [2024-02-14 19:10:54.954724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64003 ] 00:10:17.780 [2024-02-14 19:10:55.122716] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.037 [2024-02-14 19:10:55.309302] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:18.037 [2024-02-14 19:10:55.309596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.410 19:10:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:19.410 19:10:56 -- common/autotest_common.sh@850 -- # return 0 00:10:19.410 19:10:56 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:19.410 19:10:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:19.410 19:10:56 -- common/autotest_common.sh@10 -- # set +x 00:10:19.667 Some configs were skipped because the RPC state that can call them passed over. 00:10:19.667 19:10:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:19.667 19:10:56 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:10:19.667 19:10:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:19.667 19:10:56 -- common/autotest_common.sh@10 -- # set +x 00:10:19.667 19:10:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:19.667 19:10:56 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:19.667 19:10:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:19.667 19:10:56 -- common/autotest_common.sh@10 -- # set +x 00:10:19.667 19:10:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:19.667 19:10:56 -- bdev/blockdev.sh@619 -- # bdev='[ 00:10:19.667 { 00:10:19.667 "name": "Nvme0n1p1", 00:10:19.667 "aliases": [ 00:10:19.667 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:19.667 ], 00:10:19.667 "product_name": "GPT Disk", 00:10:19.667 "block_size": 4096, 00:10:19.667 "num_blocks": 774144, 00:10:19.667 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:19.667 "md_size": 64, 00:10:19.667 "md_interleave": false, 00:10:19.667 "dif_type": 0, 00:10:19.667 "assigned_rate_limits": { 00:10:19.667 "rw_ios_per_sec": 0, 00:10:19.667 "rw_mbytes_per_sec": 0, 00:10:19.667 "r_mbytes_per_sec": 0, 00:10:19.667 "w_mbytes_per_sec": 0 00:10:19.667 }, 00:10:19.667 "claimed": false, 00:10:19.667 "zoned": false, 00:10:19.667 "supported_io_types": { 00:10:19.667 "read": true, 00:10:19.667 "write": true, 00:10:19.667 "unmap": true, 00:10:19.667 "write_zeroes": true, 00:10:19.667 "flush": true, 00:10:19.667 "reset": true, 00:10:19.667 "compare": true, 00:10:19.667 "compare_and_write": false, 00:10:19.667 "abort": true, 00:10:19.667 "nvme_admin": false, 00:10:19.667 "nvme_io": false 00:10:19.667 }, 00:10:19.667 "driver_specific": { 00:10:19.667 "gpt": { 00:10:19.667 "base_bdev": "Nvme0n1", 00:10:19.667 "offset_blocks": 256, 00:10:19.667 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:19.667 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:19.667 "partition_name": "SPDK_TEST_first" 00:10:19.667 } 00:10:19.667 } 00:10:19.667 } 00:10:19.667 ]' 00:10:19.667 19:10:56 -- bdev/blockdev.sh@620 -- # jq -r length 00:10:19.667 19:10:56 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:10:19.667 19:10:56 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:10:19.667 
19:10:57 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:19.667 19:10:57 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:19.925 19:10:57 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:19.925 19:10:57 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:19.925 19:10:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:19.925 19:10:57 -- common/autotest_common.sh@10 -- # set +x 00:10:19.925 19:10:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:19.925 19:10:57 -- bdev/blockdev.sh@624 -- # bdev='[ 00:10:19.925 { 00:10:19.925 "name": "Nvme0n1p2", 00:10:19.925 "aliases": [ 00:10:19.925 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:19.925 ], 00:10:19.926 "product_name": "GPT Disk", 00:10:19.926 "block_size": 4096, 00:10:19.926 "num_blocks": 774143, 00:10:19.926 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:19.926 "md_size": 64, 00:10:19.926 "md_interleave": false, 00:10:19.926 "dif_type": 0, 00:10:19.926 "assigned_rate_limits": { 00:10:19.926 "rw_ios_per_sec": 0, 00:10:19.926 "rw_mbytes_per_sec": 0, 00:10:19.926 "r_mbytes_per_sec": 0, 00:10:19.926 "w_mbytes_per_sec": 0 00:10:19.926 }, 00:10:19.926 "claimed": false, 00:10:19.926 "zoned": false, 00:10:19.926 "supported_io_types": { 00:10:19.926 "read": true, 00:10:19.926 "write": true, 00:10:19.926 "unmap": true, 00:10:19.926 "write_zeroes": true, 00:10:19.926 "flush": true, 00:10:19.926 "reset": true, 00:10:19.926 "compare": true, 00:10:19.926 "compare_and_write": false, 00:10:19.926 "abort": true, 00:10:19.926 "nvme_admin": false, 00:10:19.926 "nvme_io": false 00:10:19.926 }, 00:10:19.926 "driver_specific": { 00:10:19.926 "gpt": { 00:10:19.926 "base_bdev": "Nvme0n1", 00:10:19.926 "offset_blocks": 774400, 00:10:19.926 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:19.926 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:19.926 "partition_name": "SPDK_TEST_second" 00:10:19.926 } 00:10:19.926 } 00:10:19.926 } 00:10:19.926 ]' 00:10:19.926 19:10:57 -- bdev/blockdev.sh@625 -- # jq -r length 00:10:19.926 19:10:57 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:10:19.926 19:10:57 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:10:19.926 19:10:57 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:19.926 19:10:57 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:19.926 19:10:57 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:19.926 19:10:57 -- bdev/blockdev.sh@629 -- # killprocess 64003 00:10:19.926 19:10:57 -- common/autotest_common.sh@924 -- # '[' -z 64003 ']' 00:10:19.926 19:10:57 -- common/autotest_common.sh@928 -- # kill -0 64003 00:10:19.926 19:10:57 -- common/autotest_common.sh@929 -- # uname 00:10:19.926 19:10:57 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:10:19.926 19:10:57 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 64003 00:10:19.926 19:10:57 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:10:19.926 19:10:57 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 
00:10:19.926 killing process with pid 64003 00:10:19.926 19:10:57 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 64003' 00:10:19.926 19:10:57 -- common/autotest_common.sh@943 -- # kill 64003 00:10:19.926 19:10:57 -- common/autotest_common.sh@948 -- # wait 64003 00:10:22.458 00:10:22.458 real 0m4.524s 00:10:22.458 user 0m5.029s 00:10:22.458 sys 0m0.444s 00:10:22.458 19:10:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:22.458 19:10:59 -- common/autotest_common.sh@10 -- # set +x 00:10:22.458 ************************************ 00:10:22.458 END TEST bdev_gpt_uuid 00:10:22.458 ************************************ 00:10:22.458 19:10:59 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:10:22.458 19:10:59 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:10:22.458 19:10:59 -- bdev/blockdev.sh@809 -- # cleanup 00:10:22.458 19:10:59 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:22.458 19:10:59 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:22.458 19:10:59 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:10:22.458 19:10:59 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:10:22.458 19:10:59 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:10:22.458 19:10:59 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:22.458 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:22.715 Waiting for block devices as requested 00:10:22.715 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:22.715 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:22.972 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:22.972 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.235 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:28.235 19:11:05 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:10:28.235 19:11:05 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:10:28.235 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:28.235 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:10:28.235 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:28.235 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:10:28.235 19:11:05 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:10:28.235 00:10:28.235 real 1m6.615s 00:10:28.235 user 1m26.822s 00:10:28.235 sys 0m9.830s 00:10:28.235 19:11:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:28.235 ************************************ 00:10:28.235 END TEST blockdev_nvme_gpt 00:10:28.235 ************************************ 00:10:28.235 19:11:05 -- common/autotest_common.sh@10 -- # set +x 00:10:28.236 19:11:05 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:28.236 19:11:05 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:28.236 19:11:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:28.236 19:11:05 -- common/autotest_common.sh@10 -- # set +x 00:10:28.236 ************************************ 00:10:28.236 START TEST nvme 00:10:28.236 ************************************ 00:10:28.236 19:11:05 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:28.494 * Looking for test storage... 
00:10:28.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:28.494 19:11:05 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:29.061 lsblk: /dev/nvme0c0n1: not a block device 00:10:29.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:29.578 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.578 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.578 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.578 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.578 19:11:06 -- nvme/nvme.sh@79 -- # uname 00:10:29.578 19:11:06 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:29.578 19:11:06 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:29.578 19:11:06 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:29.578 19:11:06 -- common/autotest_common.sh@1056 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:29.578 19:11:06 -- common/autotest_common.sh@1042 -- # _randomize_va_space=2 00:10:29.578 19:11:06 -- common/autotest_common.sh@1043 -- # echo 0 00:10:29.578 19:11:06 -- common/autotest_common.sh@1045 -- # stubpid=64702 00:10:29.578 19:11:06 -- common/autotest_common.sh@1044 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:29.578 Waiting for stub to ready for secondary processes... 00:10:29.578 19:11:06 -- common/autotest_common.sh@1046 -- # echo Waiting for stub to ready for secondary processes... 00:10:29.578 19:11:06 -- common/autotest_common.sh@1047 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:29.578 19:11:06 -- common/autotest_common.sh@1049 -- # [[ -e /proc/64702 ]] 00:10:29.578 19:11:06 -- common/autotest_common.sh@1050 -- # sleep 1s 00:10:29.578 [2024-02-14 19:11:06.995545] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:10:29.578 [2024-02-14 19:11:06.995729] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.513 [2024-02-14 19:11:07.781256] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.772 19:11:07 -- common/autotest_common.sh@1047 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:30.772 19:11:07 -- common/autotest_common.sh@1049 -- # [[ -e /proc/64702 ]] 00:10:30.772 19:11:07 -- common/autotest_common.sh@1050 -- # sleep 1s 00:10:30.772 [2024-02-14 19:11:07.988617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.772 [2024-02-14 19:11:07.988687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.772 [2024-02-14 19:11:07.988694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.772 [2024-02-14 19:11:08.012750] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:30.772 [2024-02-14 19:11:08.019282] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:30.772 [2024-02-14 19:11:08.019568] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:30.772 [2024-02-14 19:11:08.032898] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:30.772 [2024-02-14 19:11:08.033122] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:30.772 [2024-02-14 19:11:08.033315] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:30.772 [2024-02-14 19:11:08.045886] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:30.772 [2024-02-14 19:11:08.046136] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:30.772 [2024-02-14 19:11:08.046348] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:30.772 [2024-02-14 19:11:08.059628] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:30.772 [2024-02-14 19:11:08.059819] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:30.772 [2024-02-14 19:11:08.060070] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:30.772 [2024-02-14 19:11:08.060289] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:30.772 [2024-02-14 19:11:08.060583] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:31.706 19:11:08 -- common/autotest_common.sh@1047 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:31.706 done. 00:10:31.706 19:11:08 -- common/autotest_common.sh@1052 -- # echo done. 
00:10:31.706 19:11:08 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:31.706 19:11:08 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:10:31.706 19:11:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:31.706 19:11:08 -- common/autotest_common.sh@10 -- # set +x 00:10:31.706 ************************************ 00:10:31.707 START TEST nvme_reset 00:10:31.707 ************************************ 00:10:31.707 19:11:08 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:31.965 Initializing NVMe Controllers 00:10:31.965 Skipping QEMU NVMe SSD at 0000:00:06.0 00:10:31.965 Skipping QEMU NVMe SSD at 0000:00:07.0 00:10:31.965 Skipping QEMU NVMe SSD at 0000:00:09.0 00:10:31.965 Skipping QEMU NVMe SSD at 0000:00:08.0 00:10:31.965 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:31.965 ************************************ 00:10:31.965 END TEST nvme_reset 00:10:31.965 ************************************ 00:10:31.965 00:10:31.965 real 0m0.290s 00:10:31.965 user 0m0.107s 00:10:31.965 sys 0m0.137s 00:10:31.965 19:11:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:31.965 19:11:09 -- common/autotest_common.sh@10 -- # set +x 00:10:31.965 19:11:09 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:31.965 19:11:09 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:31.965 19:11:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:31.965 19:11:09 -- common/autotest_common.sh@10 -- # set +x 00:10:31.965 ************************************ 00:10:31.965 START TEST nvme_identify 00:10:31.965 ************************************ 00:10:31.965 19:11:09 -- common/autotest_common.sh@1102 -- # nvme_identify 00:10:31.965 19:11:09 -- nvme/nvme.sh@12 -- # bdfs=() 00:10:31.965 19:11:09 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:31.965 19:11:09 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:31.965 19:11:09 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:31.965 19:11:09 -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:31.965 19:11:09 -- common/autotest_common.sh@1496 -- # local bdfs 00:10:31.965 19:11:09 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:31.965 19:11:09 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:31.965 19:11:09 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:31.965 19:11:09 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:31.965 19:11:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:31.965 19:11:09 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:32.226 [2024-02-14 19:11:09.629382] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 64744 terminated unexpected 00:10:32.226 ===================================================== 00:10:32.226 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:32.226 ===================================================== 00:10:32.226 Controller Capabilities/Features 00:10:32.226 ================================ 00:10:32.226 Vendor ID: 1b36 00:10:32.226 Subsystem Vendor ID: 1af4 00:10:32.226 Serial Number: 12340 00:10:32.226 Model Number: QEMU NVMe Ctrl 00:10:32.226 Firmware Version: 8.0.0 00:10:32.226 Recommended Arb 
Burst: 6 00:10:32.226 IEEE OUI Identifier: 00 54 52 00:10:32.226 Multi-path I/O 00:10:32.226 May have multiple subsystem ports: No 00:10:32.226 May have multiple controllers: No 00:10:32.226 Associated with SR-IOV VF: No 00:10:32.226 Max Data Transfer Size: 524288 00:10:32.226 Max Number of Namespaces: 256 00:10:32.226 Max Number of I/O Queues: 64 00:10:32.226 NVMe Specification Version (VS): 1.4 00:10:32.226 NVMe Specification Version (Identify): 1.4 00:10:32.226 Maximum Queue Entries: 2048 00:10:32.226 Contiguous Queues Required: Yes 00:10:32.226 Arbitration Mechanisms Supported 00:10:32.226 Weighted Round Robin: Not Supported 00:10:32.226 Vendor Specific: Not Supported 00:10:32.226 Reset Timeout: 7500 ms 00:10:32.226 Doorbell Stride: 4 bytes 00:10:32.226 NVM Subsystem Reset: Not Supported 00:10:32.226 Command Sets Supported 00:10:32.226 NVM Command Set: Supported 00:10:32.226 Boot Partition: Not Supported 00:10:32.226 Memory Page Size Minimum: 4096 bytes 00:10:32.226 Memory Page Size Maximum: 65536 bytes 00:10:32.226 Persistent Memory Region: Not Supported 00:10:32.226 Optional Asynchronous Events Supported 00:10:32.226 Namespace Attribute Notices: Supported 00:10:32.226 Firmware Activation Notices: Not Supported 00:10:32.226 ANA Change Notices: Not Supported 00:10:32.226 PLE Aggregate Log Change Notices: Not Supported 00:10:32.226 LBA Status Info Alert Notices: Not Supported 00:10:32.226 EGE Aggregate Log Change Notices: Not Supported 00:10:32.226 Normal NVM Subsystem Shutdown event: Not Supported 00:10:32.226 Zone Descriptor Change Notices: Not Supported 00:10:32.226 Discovery Log Change Notices: Not Supported 00:10:32.226 Controller Attributes 00:10:32.226 128-bit Host Identifier: Not Supported 00:10:32.226 Non-Operational Permissive Mode: Not Supported 00:10:32.226 NVM Sets: Not Supported 00:10:32.226 Read Recovery Levels: Not Supported 00:10:32.226 Endurance Groups: Not Supported 00:10:32.226 Predictable Latency Mode: Not Supported 00:10:32.226 Traffic Based Keep ALive: Not Supported 00:10:32.226 Namespace Granularity: Not Supported 00:10:32.226 SQ Associations: Not Supported 00:10:32.226 UUID List: Not Supported 00:10:32.226 Multi-Domain Subsystem: Not Supported 00:10:32.226 Fixed Capacity Management: Not Supported 00:10:32.226 Variable Capacity Management: Not Supported 00:10:32.226 Delete Endurance Group: Not Supported 00:10:32.226 Delete NVM Set: Not Supported 00:10:32.226 Extended LBA Formats Supported: Supported 00:10:32.226 Flexible Data Placement Supported: Not Supported 00:10:32.226 00:10:32.226 Controller Memory Buffer Support 00:10:32.226 ================================ 00:10:32.226 Supported: No 00:10:32.226 00:10:32.226 Persistent Memory Region Support 00:10:32.226 ================================ 00:10:32.226 Supported: No 00:10:32.226 00:10:32.226 Admin Command Set Attributes 00:10:32.226 ============================ 00:10:32.226 Security Send/Receive: Not Supported 00:10:32.226 Format NVM: Supported 00:10:32.226 Firmware Activate/Download: Not Supported 00:10:32.226 Namespace Management: Supported 00:10:32.226 Device Self-Test: Not Supported 00:10:32.226 Directives: Supported 00:10:32.226 NVMe-MI: Not Supported 00:10:32.226 Virtualization Management: Not Supported 00:10:32.226 Doorbell Buffer Config: Supported 00:10:32.226 Get LBA Status Capability: Not Supported 00:10:32.226 Command & Feature Lockdown Capability: Not Supported 00:10:32.226 Abort Command Limit: 4 00:10:32.226 Async Event Request Limit: 4 00:10:32.226 Number of Firmware Slots: N/A 00:10:32.226 
Firmware Slot 1 Read-Only: N/A 00:10:32.226 Firmware Activation Without Reset: N/A 00:10:32.226 Multiple Update Detection Support: N/A 00:10:32.226 Firmware Update Granularity: No Information Provided 00:10:32.226 Per-Namespace SMART Log: Yes 00:10:32.226 Asymmetric Namespace Access Log Page: Not Supported 00:10:32.226 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:32.226 Command Effects Log Page: Supported 00:10:32.226 Get Log Page Extended Data: Supported 00:10:32.226 Telemetry Log Pages: Not Supported 00:10:32.226 Persistent Event Log Pages: Not Supported 00:10:32.226 Supported Log Pages Log Page: May Support 00:10:32.226 Commands Supported & Effects Log Page: Not Supported 00:10:32.226 Feature Identifiers & Effects Log Page:May Support 00:10:32.226 NVMe-MI Commands & Effects Log Page: May Support 00:10:32.226 Data Area 4 for Telemetry Log: Not Supported 00:10:32.226 Error Log Page Entries Supported: 1 00:10:32.226 Keep Alive: Not Supported 00:10:32.226 00:10:32.226 NVM Command Set Attributes 00:10:32.226 ========================== 00:10:32.226 Submission Queue Entry Size 00:10:32.226 Max: 64 00:10:32.226 Min: 64 00:10:32.226 Completion Queue Entry Size 00:10:32.226 Max: 16 00:10:32.226 Min: 16 00:10:32.226 Number of Namespaces: 256 00:10:32.226 Compare Command: Supported 00:10:32.226 Write Uncorrectable Command: Not Supported 00:10:32.226 Dataset Management Command: Supported 00:10:32.226 Write Zeroes Command: Supported 00:10:32.226 Set Features Save Field: Supported 00:10:32.226 Reservations: Not Supported 00:10:32.226 Timestamp: Supported 00:10:32.226 Copy: Supported 00:10:32.226 Volatile Write Cache: Present 00:10:32.226 Atomic Write Unit (Normal): 1 00:10:32.226 Atomic Write Unit (PFail): 1 00:10:32.226 Atomic Compare & Write Unit: 1 00:10:32.226 Fused Compare & Write: Not Supported 00:10:32.226 Scatter-Gather List 00:10:32.226 SGL Command Set: Supported 00:10:32.226 SGL Keyed: Not Supported 00:10:32.226 SGL Bit Bucket Descriptor: Not Supported 00:10:32.226 SGL Metadata Pointer: Not Supported 00:10:32.226 Oversized SGL: Not Supported 00:10:32.226 SGL Metadata Address: Not Supported 00:10:32.226 SGL Offset: Not Supported 00:10:32.226 Transport SGL Data Block: Not Supported 00:10:32.226 Replay Protected Memory Block: Not Supported 00:10:32.226 00:10:32.226 Firmware Slot Information 00:10:32.226 ========================= 00:10:32.227 Active slot: 1 00:10:32.227 Slot 1 Firmware Revision: 1.0 00:10:32.227 00:10:32.227 00:10:32.227 Commands Supported and Effects 00:10:32.227 ============================== 00:10:32.227 Admin Commands 00:10:32.227 -------------- 00:10:32.227 Delete I/O Submission Queue (00h): Supported 00:10:32.227 Create I/O Submission Queue (01h): Supported 00:10:32.227 Get Log Page (02h): Supported 00:10:32.227 Delete I/O Completion Queue (04h): Supported 00:10:32.227 Create I/O Completion Queue (05h): Supported 00:10:32.227 Identify (06h): Supported 00:10:32.227 Abort (08h): Supported 00:10:32.227 Set Features (09h): Supported 00:10:32.227 Get Features (0Ah): Supported 00:10:32.227 Asynchronous Event Request (0Ch): Supported 00:10:32.227 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:32.227 Directive Send (19h): Supported 00:10:32.227 Directive Receive (1Ah): Supported 00:10:32.227 Virtualization Management (1Ch): Supported 00:10:32.227 Doorbell Buffer Config (7Ch): Supported 00:10:32.227 Format NVM (80h): Supported LBA-Change 00:10:32.227 I/O Commands 00:10:32.227 ------------ 00:10:32.227 Flush (00h): Supported LBA-Change 00:10:32.227 Write (01h): 
Supported LBA-Change 00:10:32.227 Read (02h): Supported 00:10:32.227 Compare (05h): Supported 00:10:32.227 Write Zeroes (08h): Supported LBA-Change 00:10:32.227 Dataset Management (09h): Supported LBA-Change 00:10:32.227 Unknown (0Ch): Supported 00:10:32.227 Unknown (12h): Supported 00:10:32.227 Copy (19h): Supported LBA-Change 00:10:32.227 Unknown (1Dh): Supported LBA-Change 00:10:32.227 00:10:32.227 Error Log 00:10:32.227 ========= 00:10:32.227 00:10:32.227 Arbitration 00:10:32.227 =========== 00:10:32.227 Arbitration Burst: no limit 00:10:32.227 00:10:32.227 Power Management 00:10:32.227 ================ 00:10:32.227 Number of Power States: 1 00:10:32.227 Current Power State: Power State #0 00:10:32.227 Power State #0: 00:10:32.227 Max Power: 25.00 W 00:10:32.227 Non-Operational State: Operational 00:10:32.227 Entry Latency: 16 microseconds 00:10:32.227 Exit Latency: 4 microseconds 00:10:32.227 Relative Read Throughput: 0 00:10:32.227 Relative Read Latency: 0 00:10:32.227 Relative Write Throughput: 0 00:10:32.227 Relative Write Latency: 0 00:10:32.227 Idle Power[2024-02-14 19:11:09.630613] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 64744 terminated unexpected 00:10:32.227 : Not Reported 00:10:32.227 Active Power: Not Reported 00:10:32.227 Non-Operational Permissive Mode: Not Supported 00:10:32.227 00:10:32.227 Health Information 00:10:32.227 ================== 00:10:32.227 Critical Warnings: 00:10:32.227 Available Spare Space: OK 00:10:32.227 Temperature: OK 00:10:32.227 Device Reliability: OK 00:10:32.227 Read Only: No 00:10:32.227 Volatile Memory Backup: OK 00:10:32.227 Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.227 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:32.227 Available Spare: 0% 00:10:32.227 Available Spare Threshold: 0% 00:10:32.227 Life Percentage Used: 0% 00:10:32.227 Data Units Read: 1788 00:10:32.227 Data Units Written: 823 00:10:32.227 Host Read Commands: 88241 00:10:32.227 Host Write Commands: 43816 00:10:32.227 Controller Busy Time: 0 minutes 00:10:32.227 Power Cycles: 0 00:10:32.227 Power On Hours: 0 hours 00:10:32.227 Unsafe Shutdowns: 0 00:10:32.227 Unrecoverable Media Errors: 0 00:10:32.227 Lifetime Error Log Entries: 0 00:10:32.227 Warning Temperature Time: 0 minutes 00:10:32.227 Critical Temperature Time: 0 minutes 00:10:32.227 00:10:32.227 Number of Queues 00:10:32.227 ================ 00:10:32.227 Number of I/O Submission Queues: 64 00:10:32.227 Number of I/O Completion Queues: 64 00:10:32.227 00:10:32.227 ZNS Specific Controller Data 00:10:32.227 ============================ 00:10:32.227 Zone Append Size Limit: 0 00:10:32.227 00:10:32.227 00:10:32.227 Active Namespaces 00:10:32.227 ================= 00:10:32.227 Namespace ID:1 00:10:32.227 Error Recovery Timeout: Unlimited 00:10:32.227 Command Set Identifier: NVM (00h) 00:10:32.227 Deallocate: Supported 00:10:32.227 Deallocated/Unwritten Error: Supported 00:10:32.227 Deallocated Read Value: All 0x00 00:10:32.227 Deallocate in Write Zeroes: Not Supported 00:10:32.227 Deallocated Guard Field: 0xFFFF 00:10:32.227 Flush: Supported 00:10:32.227 Reservation: Not Supported 00:10:32.227 Metadata Transferred as: Separate Metadata Buffer 00:10:32.227 Namespace Sharing Capabilities: Private 00:10:32.227 Size (in LBAs): 1548666 (5GiB) 00:10:32.227 Capacity (in LBAs): 1548666 (5GiB) 00:10:32.227 Utilization (in LBAs): 1548666 (5GiB) 00:10:32.227 Thin Provisioning: Not Supported 00:10:32.227 Per-NS Atomic Units: No 00:10:32.227 Maximum Single Source Range Length: 
128 00:10:32.227 Maximum Copy Length: 128 00:10:32.227 Maximum Source Range Count: 128 00:10:32.227 NGUID/EUI64 Never Reused: No 00:10:32.227 Namespace Write Protected: No 00:10:32.227 Number of LBA Formats: 8 00:10:32.227 Current LBA Format: LBA Format #07 00:10:32.227 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:32.227 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:32.227 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:32.227 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:32.227 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:32.227 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:32.227 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:32.227 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:32.227 00:10:32.227 ===================================================== 00:10:32.227 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:32.227 ===================================================== 00:10:32.227 Controller Capabilities/Features 00:10:32.227 ================================ 00:10:32.227 Vendor ID: 1b36 00:10:32.227 Subsystem Vendor ID: 1af4 00:10:32.227 Serial Number: 12341 00:10:32.227 Model Number: QEMU NVMe Ctrl 00:10:32.227 Firmware Version: 8.0.0 00:10:32.227 Recommended Arb Burst: 6 00:10:32.227 IEEE OUI Identifier: 00 54 52 00:10:32.227 Multi-path I/O 00:10:32.227 May have multiple subsystem ports: No 00:10:32.227 May have multiple controllers: No 00:10:32.227 Associated with SR-IOV VF: No 00:10:32.227 Max Data Transfer Size: 524288 00:10:32.227 Max Number of Namespaces: 256 00:10:32.227 Max Number of I/O Queues: 64 00:10:32.227 NVMe Specification Version (VS): 1.4 00:10:32.227 NVMe Specification Version (Identify): 1.4 00:10:32.227 Maximum Queue Entries: 2048 00:10:32.227 Contiguous Queues Required: Yes 00:10:32.227 Arbitration Mechanisms Supported 00:10:32.227 Weighted Round Robin: Not Supported 00:10:32.227 Vendor Specific: Not Supported 00:10:32.227 Reset Timeout: 7500 ms 00:10:32.227 Doorbell Stride: 4 bytes 00:10:32.227 NVM Subsystem Reset: Not Supported 00:10:32.227 Command Sets Supported 00:10:32.227 NVM Command Set: Supported 00:10:32.227 Boot Partition: Not Supported 00:10:32.227 Memory Page Size Minimum: 4096 bytes 00:10:32.227 Memory Page Size Maximum: 65536 bytes 00:10:32.227 Persistent Memory Region: Not Supported 00:10:32.227 Optional Asynchronous Events Supported 00:10:32.227 Namespace Attribute Notices: Supported 00:10:32.227 Firmware Activation Notices: Not Supported 00:10:32.227 ANA Change Notices: Not Supported 00:10:32.227 PLE Aggregate Log Change Notices: Not Supported 00:10:32.227 LBA Status Info Alert Notices: Not Supported 00:10:32.227 EGE Aggregate Log Change Notices: Not Supported 00:10:32.227 Normal NVM Subsystem Shutdown event: Not Supported 00:10:32.227 Zone Descriptor Change Notices: Not Supported 00:10:32.227 Discovery Log Change Notices: Not Supported 00:10:32.227 Controller Attributes 00:10:32.227 128-bit Host Identifier: Not Supported 00:10:32.227 Non-Operational Permissive Mode: Not Supported 00:10:32.227 NVM Sets: Not Supported 00:10:32.227 Read Recovery Levels: Not Supported 00:10:32.227 Endurance Groups: Not Supported 00:10:32.227 Predictable Latency Mode: Not Supported 00:10:32.227 Traffic Based Keep ALive: Not Supported 00:10:32.227 Namespace Granularity: Not Supported 00:10:32.227 SQ Associations: Not Supported 00:10:32.227 UUID List: Not Supported 00:10:32.227 Multi-Domain Subsystem: Not Supported 00:10:32.227 Fixed Capacity Management: Not Supported 00:10:32.227 Variable Capacity 
Management: Not Supported 00:10:32.227 Delete Endurance Group: Not Supported 00:10:32.227 Delete NVM Set: Not Supported 00:10:32.227 Extended LBA Formats Supported: Supported 00:10:32.227 Flexible Data Placement Supported: Not Supported 00:10:32.227 00:10:32.228 Controller Memory Buffer Support 00:10:32.228 ================================ 00:10:32.228 Supported: No 00:10:32.228 00:10:32.228 Persistent Memory Region Support 00:10:32.228 ================================ 00:10:32.228 Supported: No 00:10:32.228 00:10:32.228 Admin Command Set Attributes 00:10:32.228 ============================ 00:10:32.228 Security Send/Receive: Not Supported 00:10:32.228 Format NVM: Supported 00:10:32.228 Firmware Activate/Download: Not Supported 00:10:32.228 Namespace Management: Supported 00:10:32.228 Device Self-Test: Not Supported 00:10:32.228 Directives: Supported 00:10:32.228 NVMe-MI: Not Supported 00:10:32.228 Virtualization Management: Not Supported 00:10:32.228 Doorbell Buffer Config: Supported 00:10:32.228 Get LBA Status Capability: Not Supported 00:10:32.228 Command & Feature Lockdown Capability: Not Supported 00:10:32.228 Abort Command Limit: 4 00:10:32.228 Async Event Request Limit: 4 00:10:32.228 Number of Firmware Slots: N/A 00:10:32.228 Firmware Slot 1 Read-Only: N/A 00:10:32.228 Firmware Activation Without Reset: N/A 00:10:32.228 Multiple Update Detection Support: N/A 00:10:32.228 Firmware Update Granularity: No Information Provided 00:10:32.228 Per-Namespace SMART Log: Yes 00:10:32.228 Asymmetric Namespace Access Log Page: Not Supported 00:10:32.228 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:32.228 Command Effects Log Page: Supported 00:10:32.228 Get Log Page Extended Data: Supported 00:10:32.228 Telemetry Log Pages: Not Supported 00:10:32.228 Persistent Event Log Pages: Not Supported 00:10:32.228 Supported Log Pages Log Page: May Support 00:10:32.228 Commands Supported & Effects Log Page: Not Supported 00:10:32.228 Feature Identifiers & Effects Log Page:May Support 00:10:32.228 NVMe-MI Commands & Effects Log Page: May Support 00:10:32.228 Data Area 4 for Telemetry Log: Not Supported 00:10:32.228 Error Log Page Entries Supported: 1 00:10:32.228 Keep Alive: Not Supported 00:10:32.228 00:10:32.228 NVM Command Set Attributes 00:10:32.228 ========================== 00:10:32.228 Submission Queue Entry Size 00:10:32.228 Max: 64 00:10:32.228 Min: 64 00:10:32.228 Completion Queue Entry Size 00:10:32.228 Max: 16 00:10:32.228 Min: 16 00:10:32.228 Number of Namespaces: 256 00:10:32.228 Compare Command: Supported 00:10:32.228 Write Uncorrectable Command: Not Supported 00:10:32.228 Dataset Management Command: Supported 00:10:32.228 Write Zeroes Command: Supported 00:10:32.228 Set Features Save Field: Supported 00:10:32.228 Reservations: Not Supported 00:10:32.228 Timestamp: Supported 00:10:32.228 Copy: Supported 00:10:32.228 Volatile Write Cache: Present 00:10:32.228 Atomic Write Unit (Normal): 1 00:10:32.228 Atomic Write Unit (PFail): 1 00:10:32.228 Atomic Compare & Write Unit: 1 00:10:32.228 Fused Compare & Write: Not Supported 00:10:32.228 Scatter-Gather List 00:10:32.228 SGL Command Set: Supported 00:10:32.228 SGL Keyed: Not Supported 00:10:32.228 SGL Bit Bucket Descriptor: Not Supported 00:10:32.228 SGL Metadata Pointer: Not Supported 00:10:32.228 Oversized SGL: Not Supported 00:10:32.228 SGL Metadata Address: Not Supported 00:10:32.228 SGL Offset: Not Supported 00:10:32.228 Transport SGL Data Block: Not Supported 00:10:32.228 Replay Protected Memory Block: Not Supported 00:10:32.228 
00:10:32.228 Firmware Slot Information 00:10:32.228 ========================= 00:10:32.228 Active slot: 1 00:10:32.228 Slot 1 Firmware Revision: 1.0 00:10:32.228 00:10:32.228 00:10:32.228 Commands Supported and Effects 00:10:32.228 ============================== 00:10:32.228 Admin Commands 00:10:32.228 -------------- 00:10:32.228 Delete I/O Submission Queue (00h): Supported 00:10:32.228 Create I/O Submission Queue (01h): Supported 00:10:32.228 Get Log Page (02h): Supported 00:10:32.228 Delete I/O Completion Queue (04h): Supported 00:10:32.228 Create I/O Completion Queue (05h): Supported 00:10:32.228 Identify (06h): Supported 00:10:32.228 Abort (08h): Supported 00:10:32.228 Set Features (09h): Supported 00:10:32.228 Get Features (0Ah): Supported 00:10:32.228 Asynchronous Event Request (0Ch): Supported 00:10:32.228 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:32.228 Directive Send (19h): Supported 00:10:32.228 Directive Receive (1Ah): Supported 00:10:32.228 Virtualization Management (1Ch): Supported 00:10:32.228 Doorbell Buffer Config (7Ch): Supported 00:10:32.228 Format NVM (80h): Supported LBA-Change 00:10:32.228 I/O Commands 00:10:32.228 ------------ 00:10:32.228 Flush (00h): Supported LBA-Change 00:10:32.228 Write (01h): Supported LBA-Change 00:10:32.228 Read (02h): Supported 00:10:32.228 Compare (05h): Supported 00:10:32.228 Write Zeroes (08h): Supported LBA-Change 00:10:32.228 Dataset Management (09h): Supported LBA-Change 00:10:32.228 Unknown (0Ch): Supported 00:10:32.228 Unknown (12h): Supported 00:10:32.228 Copy (19h): Supported LBA-Change 00:10:32.228 Unknown (1Dh): Supported LBA-Change 00:10:32.228 00:10:32.228 Error Log 00:10:32.228 ========= 00:10:32.228 00:10:32.228 Arbitration 00:10:32.228 =========== 00:10:32.228 Arbitration Burst: no limit 00:10:32.228 00:10:32.228 Power Management 00:10:32.228 ================ 00:10:32.228 Number of Power States: 1 00:10:32.228 Current Power State: Power State #0 00:10:32.228 Power State #0: 00:10:32.228 Max Power: 25.00 W 00:10:32.228 Non-Operational State: Operational 00:10:32.228 Entry Latency: 16 microseconds 00:10:32.228 Exit Latency: 4 microseconds 00:10:32.228 Relative Read Throughput: 0 00:10:32.228 Relative Read Latency: 0 00:10:32.228 Relative Write Throughput: 0 00:10:32.228 Relative Write Latency: 0 00:10:32.228 Idle Power: Not Reported 00:10:32.228 Active Power: Not Reported 00:10:32.228 Non-Operational Permissive Mode: Not Supported 00:10:32.228 00:10:32.228 Health Information 00:10:32.228 ================== 00:10:32.228 Critical Warnings: 00:10:32.228 Available Spare Space: OK 00:10:32.228 Temperature: OK 00:10:32.228 Device Reliability: OK 00:10:32.228 Read Only: No 00:10:32.228 Volatile Memory Backup: OK 00:10:32.228 Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.228 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:32.228 Available Spare: 0% 00:10:32.228 Available Spare Threshold: 0% 00:10:32.228 Life Percentage Used: 0% 00:10:32.228 Data Units Read: 1217 00:10:32.228 Data Units Written: 566 00:10:32.228 Host Read Commands: 60966 00:10:32.228 Host Write Commands: 30036 00:10:32.228 Controller Busy Time: 0 minutes 00:10:32.228 Power Cycles: 0 00:10:32.228 Power On Hours: 0 hours 00:10:32.228 Unsafe Shutdowns: 0 00:10:32.228 Unrecoverable Media Errors: 0 00:10:32.228 Lifetime Error Log Entries: 0 00:10:32.228 Warning Temperature Time: 0 minutes 00:10:32.228 Critical Temperature Time: 0 minutes 00:10:32.228 00:10:32.228 Number of Queues 00:10:32.228 ================ 00:10:32.228 Number of I/O 
Submission Queues: 64 00:10:32.228 Number of I/O Completion Queues: 64 00:10:32.228 00:10:32.228 ZNS Specific Controller Data 00:10:32.228 ============================ 00:10:32.228 Zone Append Size Limit: 0 00:10:32.228 00:10:32.228 00:10:32.228 Active Namespaces 00:10:32.228 ================= 00:10:32.228 Namespace ID:1 00:10:32.228 Error Recovery Timeout: Unlimited 00:10:32.228 Command Set Identifier: [2024-02-14 19:11:09.631583] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 64744 terminated unexpected 00:10:32.228 NVM (00h) 00:10:32.228 Deallocate: Supported 00:10:32.228 Deallocated/Unwritten Error: Supported 00:10:32.228 Deallocated Read Value: All 0x00 00:10:32.228 Deallocate in Write Zeroes: Not Supported 00:10:32.228 Deallocated Guard Field: 0xFFFF 00:10:32.228 Flush: Supported 00:10:32.229 Reservation: Not Supported 00:10:32.229 Namespace Sharing Capabilities: Private 00:10:32.229 Size (in LBAs): 1310720 (5GiB) 00:10:32.229 Capacity (in LBAs): 1310720 (5GiB) 00:10:32.229 Utilization (in LBAs): 1310720 (5GiB) 00:10:32.229 Thin Provisioning: Not Supported 00:10:32.229 Per-NS Atomic Units: No 00:10:32.229 Maximum Single Source Range Length: 128 00:10:32.229 Maximum Copy Length: 128 00:10:32.229 Maximum Source Range Count: 128 00:10:32.229 NGUID/EUI64 Never Reused: No 00:10:32.229 Namespace Write Protected: No 00:10:32.229 Number of LBA Formats: 8 00:10:32.229 Current LBA Format: LBA Format #04 00:10:32.229 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:32.229 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:32.229 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:32.229 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:32.229 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:32.229 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:32.229 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:32.229 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:32.229 00:10:32.229 ===================================================== 00:10:32.229 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:32.229 ===================================================== 00:10:32.229 Controller Capabilities/Features 00:10:32.229 ================================ 00:10:32.229 Vendor ID: 1b36 00:10:32.229 Subsystem Vendor ID: 1af4 00:10:32.229 Serial Number: 12343 00:10:32.229 Model Number: QEMU NVMe Ctrl 00:10:32.229 Firmware Version: 8.0.0 00:10:32.229 Recommended Arb Burst: 6 00:10:32.229 IEEE OUI Identifier: 00 54 52 00:10:32.229 Multi-path I/O 00:10:32.229 May have multiple subsystem ports: No 00:10:32.229 May have multiple controllers: Yes 00:10:32.229 Associated with SR-IOV VF: No 00:10:32.229 Max Data Transfer Size: 524288 00:10:32.229 Max Number of Namespaces: 256 00:10:32.229 Max Number of I/O Queues: 64 00:10:32.229 NVMe Specification Version (VS): 1.4 00:10:32.229 NVMe Specification Version (Identify): 1.4 00:10:32.229 Maximum Queue Entries: 2048 00:10:32.229 Contiguous Queues Required: Yes 00:10:32.229 Arbitration Mechanisms Supported 00:10:32.229 Weighted Round Robin: Not Supported 00:10:32.229 Vendor Specific: Not Supported 00:10:32.229 Reset Timeout: 7500 ms 00:10:32.229 Doorbell Stride: 4 bytes 00:10:32.229 NVM Subsystem Reset: Not Supported 00:10:32.229 Command Sets Supported 00:10:32.229 NVM Command Set: Supported 00:10:32.229 Boot Partition: Not Supported 00:10:32.229 Memory Page Size Minimum: 4096 bytes 00:10:32.229 Memory Page Size Maximum: 65536 bytes 00:10:32.229 Persistent Memory Region: Not Supported 00:10:32.229 
Optional Asynchronous Events Supported 00:10:32.229 Namespace Attribute Notices: Supported 00:10:32.229 Firmware Activation Notices: Not Supported 00:10:32.229 ANA Change Notices: Not Supported 00:10:32.229 PLE Aggregate Log Change Notices: Not Supported 00:10:32.229 LBA Status Info Alert Notices: Not Supported 00:10:32.229 EGE Aggregate Log Change Notices: Not Supported 00:10:32.229 Normal NVM Subsystem Shutdown event: Not Supported 00:10:32.229 Zone Descriptor Change Notices: Not Supported 00:10:32.229 Discovery Log Change Notices: Not Supported 00:10:32.229 Controller Attributes 00:10:32.229 128-bit Host Identifier: Not Supported 00:10:32.229 Non-Operational Permissive Mode: Not Supported 00:10:32.229 NVM Sets: Not Supported 00:10:32.229 Read Recovery Levels: Not Supported 00:10:32.229 Endurance Groups: Supported 00:10:32.229 Predictable Latency Mode: Not Supported 00:10:32.229 Traffic Based Keep ALive: Not Supported 00:10:32.229 Namespace Granularity: Not Supported 00:10:32.229 SQ Associations: Not Supported 00:10:32.229 UUID List: Not Supported 00:10:32.229 Multi-Domain Subsystem: Not Supported 00:10:32.229 Fixed Capacity Management: Not Supported 00:10:32.229 Variable Capacity Management: Not Supported 00:10:32.229 Delete Endurance Group: Not Supported 00:10:32.229 Delete NVM Set: Not Supported 00:10:32.229 Extended LBA Formats Supported: Supported 00:10:32.229 Flexible Data Placement Supported: Supported 00:10:32.229 00:10:32.229 Controller Memory Buffer Support 00:10:32.229 ================================ 00:10:32.229 Supported: No 00:10:32.229 00:10:32.229 Persistent Memory Region Support 00:10:32.229 ================================ 00:10:32.229 Supported: No 00:10:32.229 00:10:32.229 Admin Command Set Attributes 00:10:32.229 ============================ 00:10:32.229 Security Send/Receive: Not Supported 00:10:32.229 Format NVM: Supported 00:10:32.229 Firmware Activate/Download: Not Supported 00:10:32.229 Namespace Management: Supported 00:10:32.229 Device Self-Test: Not Supported 00:10:32.229 Directives: Supported 00:10:32.229 NVMe-MI: Not Supported 00:10:32.229 Virtualization Management: Not Supported 00:10:32.229 Doorbell Buffer Config: Supported 00:10:32.229 Get LBA Status Capability: Not Supported 00:10:32.229 Command & Feature Lockdown Capability: Not Supported 00:10:32.229 Abort Command Limit: 4 00:10:32.229 Async Event Request Limit: 4 00:10:32.229 Number of Firmware Slots: N/A 00:10:32.229 Firmware Slot 1 Read-Only: N/A 00:10:32.229 Firmware Activation Without Reset: N/A 00:10:32.229 Multiple Update Detection Support: N/A 00:10:32.229 Firmware Update Granularity: No Information Provided 00:10:32.229 Per-Namespace SMART Log: Yes 00:10:32.229 Asymmetric Namespace Access Log Page: Not Supported 00:10:32.229 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:32.229 Command Effects Log Page: Supported 00:10:32.229 Get Log Page Extended Data: Supported 00:10:32.229 Telemetry Log Pages: Not Supported 00:10:32.229 Persistent Event Log Pages: Not Supported 00:10:32.229 Supported Log Pages Log Page: May Support 00:10:32.229 Commands Supported & Effects Log Page: Not Supported 00:10:32.229 Feature Identifiers & Effects Log Page:May Support 00:10:32.229 NVMe-MI Commands & Effects Log Page: May Support 00:10:32.229 Data Area 4 for Telemetry Log: Not Supported 00:10:32.229 Error Log Page Entries Supported: 1 00:10:32.229 Keep Alive: Not Supported 00:10:32.229 00:10:32.229 NVM Command Set Attributes 00:10:32.229 ========================== 00:10:32.229 Submission Queue Entry Size 
00:10:32.229 Max: 64 00:10:32.229 Min: 64 00:10:32.229 Completion Queue Entry Size 00:10:32.229 Max: 16 00:10:32.229 Min: 16 00:10:32.229 Number of Namespaces: 256 00:10:32.229 Compare Command: Supported 00:10:32.229 Write Uncorrectable Command: Not Supported 00:10:32.229 Dataset Management Command: Supported 00:10:32.229 Write Zeroes Command: Supported 00:10:32.229 Set Features Save Field: Supported 00:10:32.229 Reservations: Not Supported 00:10:32.229 Timestamp: Supported 00:10:32.229 Copy: Supported 00:10:32.229 Volatile Write Cache: Present 00:10:32.229 Atomic Write Unit (Normal): 1 00:10:32.229 Atomic Write Unit (PFail): 1 00:10:32.229 Atomic Compare & Write Unit: 1 00:10:32.229 Fused Compare & Write: Not Supported 00:10:32.229 Scatter-Gather List 00:10:32.229 SGL Command Set: Supported 00:10:32.229 SGL Keyed: Not Supported 00:10:32.229 SGL Bit Bucket Descriptor: Not Supported 00:10:32.229 SGL Metadata Pointer: Not Supported 00:10:32.229 Oversized SGL: Not Supported 00:10:32.229 SGL Metadata Address: Not Supported 00:10:32.229 SGL Offset: Not Supported 00:10:32.229 Transport SGL Data Block: Not Supported 00:10:32.229 Replay Protected Memory Block: Not Supported 00:10:32.229 00:10:32.229 Firmware Slot Information 00:10:32.229 ========================= 00:10:32.230 Active slot: 1 00:10:32.230 Slot 1 Firmware Revision: 1.0 00:10:32.230 00:10:32.230 00:10:32.230 Commands Supported and Effects 00:10:32.230 ============================== 00:10:32.230 Admin Commands 00:10:32.230 -------------- 00:10:32.230 Delete I/O Submission Queue (00h): Supported 00:10:32.230 Create I/O Submission Queue (01h): Supported 00:10:32.230 Get Log Page (02h): Supported 00:10:32.230 Delete I/O Completion Queue (04h): Supported 00:10:32.230 Create I/O Completion Queue (05h): Supported 00:10:32.230 Identify (06h): Supported 00:10:32.230 Abort (08h): Supported 00:10:32.230 Set Features (09h): Supported 00:10:32.230 Get Features (0Ah): Supported 00:10:32.230 Asynchronous Event Request (0Ch): Supported 00:10:32.230 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:32.230 Directive Send (19h): Supported 00:10:32.230 Directive Receive (1Ah): Supported 00:10:32.230 Virtualization Management (1Ch): Supported 00:10:32.230 Doorbell Buffer Config (7Ch): Supported 00:10:32.230 Format NVM (80h): Supported LBA-Change 00:10:32.230 I/O Commands 00:10:32.230 ------------ 00:10:32.230 Flush (00h): Supported LBA-Change 00:10:32.230 Write (01h): Supported LBA-Change 00:10:32.230 Read (02h): Supported 00:10:32.230 Compare (05h): Supported 00:10:32.230 Write Zeroes (08h): Supported LBA-Change 00:10:32.230 Dataset Management (09h): Supported LBA-Change 00:10:32.230 Unknown (0Ch): Supported 00:10:32.230 Unknown (12h): Supported 00:10:32.230 Copy (19h): Supported LBA-Change 00:10:32.230 Unknown (1Dh): Supported LBA-Change 00:10:32.230 00:10:32.230 Error Log 00:10:32.230 ========= 00:10:32.230 00:10:32.230 Arbitration 00:10:32.230 =========== 00:10:32.230 Arbitration Burst: no limit 00:10:32.230 00:10:32.230 Power Management 00:10:32.230 ================ 00:10:32.230 Number of Power States: 1 00:10:32.230 Current Power State: Power State #0 00:10:32.230 Power State #0: 00:10:32.230 Max Power: 25.00 W 00:10:32.230 Non-Operational State: Operational 00:10:32.230 Entry Latency: 16 microseconds 00:10:32.230 Exit Latency: 4 microseconds 00:10:32.230 Relative Read Throughput: 0 00:10:32.230 Relative Read Latency: 0 00:10:32.230 Relative Write Throughput: 0 00:10:32.230 Relative Write Latency: 0 00:10:32.230 Idle Power: Not 
Reported 00:10:32.230 Active Power: Not Reported 00:10:32.230 Non-Operational Permissive Mode: Not Supported 00:10:32.230 00:10:32.230 Health Information 00:10:32.230 ================== 00:10:32.230 Critical Warnings: 00:10:32.230 Available Spare Space: OK 00:10:32.230 Temperature: OK 00:10:32.230 Device Reliability: OK 00:10:32.230 Read Only: No 00:10:32.230 Volatile Memory Backup: OK 00:10:32.230 Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.230 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:32.230 Available Spare: 0% 00:10:32.230 Available Spare Threshold: 0% 00:10:32.230 Life Percentage Used: 0% 00:10:32.230 Data Units Read: 1275 00:10:32.230 Data Units Written: 606 00:10:32.230 Host Read Commands: 60934 00:10:32.230 Host Write Commands: 30384 00:10:32.230 Controller Busy Time: 0 minutes 00:10:32.230 Power Cycles: 0 00:10:32.230 Power On Hours: 0 hours 00:10:32.230 Unsafe Shutdowns: 0 00:10:32.230 Unrecoverable Media Errors: 0 00:10:32.230 Lifetime Error Log Entries: 0 00:10:32.230 Warning Temperature Time: 0 minutes 00:10:32.230 Critical Temperature Time: 0 minutes 00:10:32.230 00:10:32.230 Number of Queues 00:10:32.230 ================ 00:10:32.230 Number of I/O Submission Queues: 64 00:10:32.230 Number of I/O Completion Queues: 64 00:10:32.230 00:10:32.230 ZNS Specific Controller Data 00:10:32.230 ============================ 00:10:32.230 Zone Append Size Limit: 0 00:10:32.230 00:10:32.230 00:10:32.230 Active Namespaces 00:10:32.230 ================= 00:10:32.230 Namespace ID:1 00:10:32.230 Error Recovery Timeout: Unlimited 00:10:32.230 Command Set Identifier: NVM (00h) 00:10:32.230 Deallocate: Supported 00:10:32.230 Deallocated/Unwritten Error: Supported 00:10:32.230 Deallocated Read Value: All 0x00 00:10:32.230 Deallocate in Write Zeroes: Not Supported 00:10:32.230 Deallocated Guard Field: 0xFFFF 00:10:32.230 Flush: Supported 00:10:32.230 Reservation: Not Supported 00:10:32.230 Namespace Sharing Capabilities: Multiple Controllers 00:10:32.230 Size (in LBAs): 262144 (1GiB) 00:10:32.230 Capacity (in LBAs): 262144 (1GiB) 00:10:32.230 Utilization (in LBAs): 262144 (1GiB) 00:10:32.230 Thin Provisioning: Not Supported 00:10:32.230 Per-NS Atomic Units: No 00:10:32.230 Maximum Single Source Range Length: 128 00:10:32.230 Maximum Copy Length: 128 00:10:32.230 Maximum Source Range Count: 128 00:10:32.230 NGUID/EUI64 Never Reused: No 00:10:32.230 Namespace Write Protected: No 00:10:32.230 Endurance group ID: 1 00:10:32.230 Number of LBA Formats: 8 00:10:32.230 Current LBA Format: LBA Format #04 00:10:32.230 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:32.230 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:32.230 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:32.230 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:32.230 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:32.230 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:32.230 LBA Format #06: Data Si[2024-02-14 19:11:09.632994] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 64744 terminated unexpected 00:10:32.230 ze: 4096 Metadata Size: 16 00:10:32.230 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:32.230 00:10:32.230 Get Feature FDP: 00:10:32.230 ================ 00:10:32.230 Enabled: Yes 00:10:32.230 FDP configuration index: 0 00:10:32.230 00:10:32.230 FDP configurations log page 00:10:32.230 =========================== 00:10:32.230 Number of FDP configurations: 1 00:10:32.230 Version: 0 00:10:32.230 Size: 112 00:10:32.230 FDP 
Configuration Descriptor: 0 00:10:32.230 Descriptor Size: 96 00:10:32.230 Reclaim Group Identifier format: 2 00:10:32.230 FDP Volatile Write Cache: Not Present 00:10:32.230 FDP Configuration: Valid 00:10:32.230 Vendor Specific Size: 0 00:10:32.230 Number of Reclaim Groups: 2 00:10:32.230 Number of Reclaim Unit Handles: 8 00:10:32.230 Max Placement Identifiers: 128 00:10:32.230 Number of Namespaces Supported: 256 00:10:32.230 Reclaim unit Nominal Size: 6000000 bytes 00:10:32.230 Estimated Reclaim Unit Time Limit: Not Reported 00:10:32.230 RUH Desc #000: RUH Type: Initially Isolated 00:10:32.230 RUH Desc #001: RUH Type: Initially Isolated 00:10:32.230 RUH Desc #002: RUH Type: Initially Isolated 00:10:32.230 RUH Desc #003: RUH Type: Initially Isolated 00:10:32.230 RUH Desc #004: RUH Type: Initially Isolated 00:10:32.230 RUH Desc #005: RUH Type: Initially Isolated 00:10:32.230 RUH Desc #006: RUH Type: Initially Isolated 00:10:32.230 RUH Desc #007: RUH Type: Initially Isolated 00:10:32.230 00:10:32.230 FDP reclaim unit handle usage log page 00:10:32.230 ====================================== 00:10:32.230 Number of Reclaim Unit Handles: 8 00:10:32.230 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:32.230 RUH Usage Desc #001: RUH Attributes: Unused 00:10:32.230 RUH Usage Desc #002: RUH Attributes: Unused 00:10:32.230 RUH Usage Desc #003: RUH Attributes: Unused 00:10:32.230 RUH Usage Desc #004: RUH Attributes: Unused 00:10:32.230 RUH Usage Desc #005: RUH Attributes: Unused 00:10:32.230 RUH Usage Desc #006: RUH Attributes: Unused 00:10:32.230 RUH Usage Desc #007: RUH Attributes: Unused 00:10:32.230 00:10:32.230 FDP statistics log page 00:10:32.230 ======================= 00:10:32.230 Host bytes with metadata written: 390017024 00:10:32.230 Media bytes with metadata written: 390090752 00:10:32.230 Media bytes erased: 0 00:10:32.230 00:10:32.230 FDP events log page 00:10:32.230 =================== 00:10:32.230 Number of FDP events: 0 00:10:32.230 00:10:32.230 ===================================================== 00:10:32.230 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:32.230 ===================================================== 00:10:32.230 Controller Capabilities/Features 00:10:32.230 ================================ 00:10:32.230 Vendor ID: 1b36 00:10:32.230 Subsystem Vendor ID: 1af4 00:10:32.230 Serial Number: 12342 00:10:32.230 Model Number: QEMU NVMe Ctrl 00:10:32.230 Firmware Version: 8.0.0 00:10:32.230 Recommended Arb Burst: 6 00:10:32.230 IEEE OUI Identifier: 00 54 52 00:10:32.230 Multi-path I/O 00:10:32.230 May have multiple subsystem ports: No 00:10:32.230 May have multiple controllers: No 00:10:32.231 Associated with SR-IOV VF: No 00:10:32.231 Max Data Transfer Size: 524288 00:10:32.231 Max Number of Namespaces: 256 00:10:32.231 Max Number of I/O Queues: 64 00:10:32.231 NVMe Specification Version (VS): 1.4 00:10:32.231 NVMe Specification Version (Identify): 1.4 00:10:32.231 Maximum Queue Entries: 2048 00:10:32.231 Contiguous Queues Required: Yes 00:10:32.231 Arbitration Mechanisms Supported 00:10:32.231 Weighted Round Robin: Not Supported 00:10:32.231 Vendor Specific: Not Supported 00:10:32.231 Reset Timeout: 7500 ms 00:10:32.231 Doorbell Stride: 4 bytes 00:10:32.231 NVM Subsystem Reset: Not Supported 00:10:32.231 Command Sets Supported 00:10:32.231 NVM Command Set: Supported 00:10:32.231 Boot Partition: Not Supported 00:10:32.231 Memory Page Size Minimum: 4096 bytes 00:10:32.231 Memory Page Size Maximum: 65536 bytes 00:10:32.231 Persistent Memory Region: Not 
Supported 00:10:32.231 Optional Asynchronous Events Supported 00:10:32.231 Namespace Attribute Notices: Supported 00:10:32.231 Firmware Activation Notices: Not Supported 00:10:32.231 ANA Change Notices: Not Supported 00:10:32.231 PLE Aggregate Log Change Notices: Not Supported 00:10:32.231 LBA Status Info Alert Notices: Not Supported 00:10:32.231 EGE Aggregate Log Change Notices: Not Supported 00:10:32.231 Normal NVM Subsystem Shutdown event: Not Supported 00:10:32.231 Zone Descriptor Change Notices: Not Supported 00:10:32.231 Discovery Log Change Notices: Not Supported 00:10:32.231 Controller Attributes 00:10:32.231 128-bit Host Identifier: Not Supported 00:10:32.231 Non-Operational Permissive Mode: Not Supported 00:10:32.231 NVM Sets: Not Supported 00:10:32.231 Read Recovery Levels: Not Supported 00:10:32.231 Endurance Groups: Not Supported 00:10:32.231 Predictable Latency Mode: Not Supported 00:10:32.231 Traffic Based Keep ALive: Not Supported 00:10:32.231 Namespace Granularity: Not Supported 00:10:32.231 SQ Associations: Not Supported 00:10:32.231 UUID List: Not Supported 00:10:32.231 Multi-Domain Subsystem: Not Supported 00:10:32.231 Fixed Capacity Management: Not Supported 00:10:32.231 Variable Capacity Management: Not Supported 00:10:32.231 Delete Endurance Group: Not Supported 00:10:32.231 Delete NVM Set: Not Supported 00:10:32.231 Extended LBA Formats Supported: Supported 00:10:32.231 Flexible Data Placement Supported: Not Supported 00:10:32.231 00:10:32.231 Controller Memory Buffer Support 00:10:32.231 ================================ 00:10:32.231 Supported: No 00:10:32.231 00:10:32.231 Persistent Memory Region Support 00:10:32.231 ================================ 00:10:32.231 Supported: No 00:10:32.231 00:10:32.231 Admin Command Set Attributes 00:10:32.231 ============================ 00:10:32.231 Security Send/Receive: Not Supported 00:10:32.231 Format NVM: Supported 00:10:32.231 Firmware Activate/Download: Not Supported 00:10:32.231 Namespace Management: Supported 00:10:32.231 Device Self-Test: Not Supported 00:10:32.231 Directives: Supported 00:10:32.231 NVMe-MI: Not Supported 00:10:32.231 Virtualization Management: Not Supported 00:10:32.231 Doorbell Buffer Config: Supported 00:10:32.231 Get LBA Status Capability: Not Supported 00:10:32.231 Command & Feature Lockdown Capability: Not Supported 00:10:32.231 Abort Command Limit: 4 00:10:32.231 Async Event Request Limit: 4 00:10:32.231 Number of Firmware Slots: N/A 00:10:32.231 Firmware Slot 1 Read-Only: N/A 00:10:32.231 Firmware Activation Without Reset: N/A 00:10:32.231 Multiple Update Detection Support: N/A 00:10:32.231 Firmware Update Granularity: No Information Provided 00:10:32.231 Per-Namespace SMART Log: Yes 00:10:32.231 Asymmetric Namespace Access Log Page: Not Supported 00:10:32.231 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:32.231 Command Effects Log Page: Supported 00:10:32.231 Get Log Page Extended Data: Supported 00:10:32.231 Telemetry Log Pages: Not Supported 00:10:32.231 Persistent Event Log Pages: Not Supported 00:10:32.231 Supported Log Pages Log Page: May Support 00:10:32.231 Commands Supported & Effects Log Page: Not Supported 00:10:32.231 Feature Identifiers & Effects Log Page:May Support 00:10:32.231 NVMe-MI Commands & Effects Log Page: May Support 00:10:32.231 Data Area 4 for Telemetry Log: Not Supported 00:10:32.231 Error Log Page Entries Supported: 1 00:10:32.231 Keep Alive: Not Supported 00:10:32.231 00:10:32.231 NVM Command Set Attributes 00:10:32.231 ========================== 00:10:32.231 
Submission Queue Entry Size 00:10:32.231 Max: 64 00:10:32.231 Min: 64 00:10:32.231 Completion Queue Entry Size 00:10:32.231 Max: 16 00:10:32.231 Min: 16 00:10:32.231 Number of Namespaces: 256 00:10:32.231 Compare Command: Supported 00:10:32.231 Write Uncorrectable Command: Not Supported 00:10:32.231 Dataset Management Command: Supported 00:10:32.231 Write Zeroes Command: Supported 00:10:32.231 Set Features Save Field: Supported 00:10:32.231 Reservations: Not Supported 00:10:32.231 Timestamp: Supported 00:10:32.231 Copy: Supported 00:10:32.231 Volatile Write Cache: Present 00:10:32.231 Atomic Write Unit (Normal): 1 00:10:32.231 Atomic Write Unit (PFail): 1 00:10:32.231 Atomic Compare & Write Unit: 1 00:10:32.231 Fused Compare & Write: Not Supported 00:10:32.231 Scatter-Gather List 00:10:32.231 SGL Command Set: Supported 00:10:32.231 SGL Keyed: Not Supported 00:10:32.231 SGL Bit Bucket Descriptor: Not Supported 00:10:32.231 SGL Metadata Pointer: Not Supported 00:10:32.231 Oversized SGL: Not Supported 00:10:32.231 SGL Metadata Address: Not Supported 00:10:32.231 SGL Offset: Not Supported 00:10:32.231 Transport SGL Data Block: Not Supported 00:10:32.231 Replay Protected Memory Block: Not Supported 00:10:32.231 00:10:32.231 Firmware Slot Information 00:10:32.231 ========================= 00:10:32.231 Active slot: 1 00:10:32.231 Slot 1 Firmware Revision: 1.0 00:10:32.231 00:10:32.231 00:10:32.231 Commands Supported and Effects 00:10:32.231 ============================== 00:10:32.231 Admin Commands 00:10:32.231 -------------- 00:10:32.231 Delete I/O Submission Queue (00h): Supported 00:10:32.231 Create I/O Submission Queue (01h): Supported 00:10:32.231 Get Log Page (02h): Supported 00:10:32.232 Delete I/O Completion Queue (04h): Supported 00:10:32.232 Create I/O Completion Queue (05h): Supported 00:10:32.232 Identify (06h): Supported 00:10:32.232 Abort (08h): Supported 00:10:32.232 Set Features (09h): Supported 00:10:32.232 Get Features (0Ah): Supported 00:10:32.232 Asynchronous Event Request (0Ch): Supported 00:10:32.232 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:32.232 Directive Send (19h): Supported 00:10:32.232 Directive Receive (1Ah): Supported 00:10:32.232 Virtualization Management (1Ch): Supported 00:10:32.232 Doorbell Buffer Config (7Ch): Supported 00:10:32.232 Format NVM (80h): Supported LBA-Change 00:10:32.232 I/O Commands 00:10:32.232 ------------ 00:10:32.232 Flush (00h): Supported LBA-Change 00:10:32.232 Write (01h): Supported LBA-Change 00:10:32.232 Read (02h): Supported 00:10:32.232 Compare (05h): Supported 00:10:32.232 Write Zeroes (08h): Supported LBA-Change 00:10:32.232 Dataset Management (09h): Supported LBA-Change 00:10:32.232 Unknown (0Ch): Supported 00:10:32.232 Unknown (12h): Supported 00:10:32.232 Copy (19h): Supported LBA-Change 00:10:32.232 Unknown (1Dh): Supported LBA-Change 00:10:32.232 00:10:32.232 Error Log 00:10:32.232 ========= 00:10:32.232 00:10:32.232 Arbitration 00:10:32.232 =========== 00:10:32.232 Arbitration Burst: no limit 00:10:32.232 00:10:32.232 Power Management 00:10:32.232 ================ 00:10:32.232 Number of Power States: 1 00:10:32.232 Current Power State: Power State #0 00:10:32.232 Power State #0: 00:10:32.232 Max Power: 25.00 W 00:10:32.232 Non-Operational State: Operational 00:10:32.232 Entry Latency: 16 microseconds 00:10:32.232 Exit Latency: 4 microseconds 00:10:32.232 Relative Read Throughput: 0 00:10:32.232 Relative Read Latency: 0 00:10:32.232 Relative Write Throughput: 0 00:10:32.232 Relative Write Latency: 0 
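The FDP statistics log page reported earlier for the controller at 0000:00:09.0 (serial 12343) gives its counters in raw bytes. A worked conversion to MiB, using awk only for the fractional result; the numbers are copied from that log page and nothing here is part of the test harness:

  awk 'BEGIN {
    host  = 390017024   # Host bytes with metadata written
    media = 390090752   # Media bytes with metadata written
    printf "host: %.1f MiB, media: %.1f MiB\n", host/1048576, media/1048576
  }'
  # prints: host: 371.9 MiB, media: 372.0 MiB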
00:10:32.232 Idle Power: Not Reported 00:10:32.232 Active Power: Not Reported 00:10:32.232 Non-Operational Permissive Mode: Not Supported 00:10:32.232 00:10:32.232 Health Information 00:10:32.232 ================== 00:10:32.232 Critical Warnings: 00:10:32.232 Available Spare Space: OK 00:10:32.232 Temperature: OK 00:10:32.232 Device Reliability: OK 00:10:32.232 Read Only: No 00:10:32.232 Volatile Memory Backup: OK 00:10:32.232 Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.232 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:32.232 Available Spare: 0% 00:10:32.232 Available Spare Threshold: 0% 00:10:32.232 Life Percentage Used: 0% 00:10:32.232 Data Units Read: 3747 00:10:32.232 Data Units Written: 1731 00:10:32.232 Host Read Commands: 183914 00:10:32.232 Host Write Commands: 90444 00:10:32.232 Controller Busy Time: 0 minutes 00:10:32.232 Power Cycles: 0 00:10:32.232 Power On Hours: 0 hours 00:10:32.232 Unsafe Shutdowns: 0 00:10:32.232 Unrecoverable Media Errors: 0 00:10:32.232 Lifetime Error Log Entries: 0 00:10:32.232 Warning Temperature Time: 0 minutes 00:10:32.232 Critical Temperature Time: 0 minutes 00:10:32.232 00:10:32.232 Number of Queues 00:10:32.232 ================ 00:10:32.232 Number of I/O Submission Queues: 64 00:10:32.232 Number of I/O Completion Queues: 64 00:10:32.232 00:10:32.232 ZNS Specific Controller Data 00:10:32.232 ============================ 00:10:32.232 Zone Append Size Limit: 0 00:10:32.232 00:10:32.232 00:10:32.232 Active Namespaces 00:10:32.232 ================= 00:10:32.232 Namespace ID:1 00:10:32.232 Error Recovery Timeout: Unlimited 00:10:32.232 Command Set Identifier: NVM (00h) 00:10:32.232 Deallocate: Supported 00:10:32.232 Deallocated/Unwritten Error: Supported 00:10:32.232 Deallocated Read Value: All 0x00 00:10:32.232 Deallocate in Write Zeroes: Not Supported 00:10:32.232 Deallocated Guard Field: 0xFFFF 00:10:32.232 Flush: Supported 00:10:32.232 Reservation: Not Supported 00:10:32.232 Namespace Sharing Capabilities: Private 00:10:32.232 Size (in LBAs): 1048576 (4GiB) 00:10:32.232 Capacity (in LBAs): 1048576 (4GiB) 00:10:32.232 Utilization (in LBAs): 1048576 (4GiB) 00:10:32.232 Thin Provisioning: Not Supported 00:10:32.232 Per-NS Atomic Units: No 00:10:32.491 Maximum Single Source Range Length: 128 00:10:32.491 Maximum Copy Length: 128 00:10:32.491 Maximum Source Range Count: 128 00:10:32.491 NGUID/EUI64 Never Reused: No 00:10:32.491 Namespace Write Protected: No 00:10:32.491 Number of LBA Formats: 8 00:10:32.491 Current LBA Format: LBA Format #04 00:10:32.491 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:32.491 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:32.491 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:32.491 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:32.491 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:32.491 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:32.491 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:32.491 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:32.491 00:10:32.491 Namespace ID:2 00:10:32.491 Error Recovery Timeout: Unlimited 00:10:32.491 Command Set Identifier: NVM (00h) 00:10:32.491 Deallocate: Supported 00:10:32.491 Deallocated/Unwritten Error: Supported 00:10:32.491 Deallocated Read Value: All 0x00 00:10:32.491 Deallocate in Write Zeroes: Not Supported 00:10:32.491 Deallocated Guard Field: 0xFFFF 00:10:32.491 Flush: Supported 00:10:32.491 Reservation: Not Supported 00:10:32.491 Namespace Sharing Capabilities: Private 00:10:32.491 Size (in LBAs): 
1048576 (4GiB) 00:10:32.491 Capacity (in LBAs): 1048576 (4GiB) 00:10:32.491 Utilization (in LBAs): 1048576 (4GiB) 00:10:32.491 Thin Provisioning: Not Supported 00:10:32.491 Per-NS Atomic Units: No 00:10:32.491 Maximum Single Source Range Length: 128 00:10:32.491 Maximum Copy Length: 128 00:10:32.491 Maximum Source Range Count: 128 00:10:32.491 NGUID/EUI64 Never Reused: No 00:10:32.491 Namespace Write Protected: No 00:10:32.491 Number of LBA Formats: 8 00:10:32.491 Current LBA Format: LBA Format #04 00:10:32.491 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:32.491 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:32.491 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:32.491 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:32.491 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:32.491 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:32.491 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:32.491 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:32.491 00:10:32.491 Namespace ID:3 00:10:32.491 Error Recovery Timeout: Unlimited 00:10:32.491 Command Set Identifier: NVM (00h) 00:10:32.491 Deallocate: Supported 00:10:32.491 Deallocated/Unwritten Error: Supported 00:10:32.491 Deallocated Read Value: All 0x00 00:10:32.491 Deallocate in Write Zeroes: Not Supported 00:10:32.491 Deallocated Guard Field: 0xFFFF 00:10:32.491 Flush: Supported 00:10:32.491 Reservation: Not Supported 00:10:32.491 Namespace Sharing Capabilities: Private 00:10:32.491 Size (in LBAs): 1048576 (4GiB) 00:10:32.491 Capacity (in LBAs): 1048576 (4GiB) 00:10:32.491 Utilization (in LBAs): 1048576 (4GiB) 00:10:32.491 Thin Provisioning: Not Supported 00:10:32.491 Per-NS Atomic Units: No 00:10:32.491 Maximum Single Source Range Length: 128 00:10:32.491 Maximum Copy Length: 128 00:10:32.491 Maximum Source Range Count: 128 00:10:32.491 NGUID/EUI64 Never Reused: No 00:10:32.491 Namespace Write Protected: No 00:10:32.491 Number of LBA Formats: 8 00:10:32.491 Current LBA Format: LBA Format #04 00:10:32.491 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:32.491 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:32.491 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:32.491 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:32.491 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:32.491 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:32.491 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:32.491 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:32.491 00:10:32.491 19:11:09 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:32.491 19:11:09 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:10:32.750 ===================================================== 00:10:32.750 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:32.750 ===================================================== 00:10:32.750 Controller Capabilities/Features 00:10:32.750 ================================ 00:10:32.750 Vendor ID: 1b36 00:10:32.750 Subsystem Vendor ID: 1af4 00:10:32.750 Serial Number: 12340 00:10:32.750 Model Number: QEMU NVMe Ctrl 00:10:32.750 Firmware Version: 8.0.0 00:10:32.750 Recommended Arb Burst: 6 00:10:32.750 IEEE OUI Identifier: 00 54 52 00:10:32.750 Multi-path I/O 00:10:32.750 May have multiple subsystem ports: No 00:10:32.750 May have multiple controllers: No 00:10:32.750 Associated with SR-IOV VF: No 00:10:32.750 Max Data Transfer Size: 524288 00:10:32.750 Max Number of Namespaces: 256 
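The dump that follows is produced by the loop shown just above in nvme.sh, which runs spdk_nvme_identify once per PCIe address. A standalone sketch of the same idea; the BDF list is hard-coded here purely for illustration, whereas the test script fills ${bdfs[@]} itself:

  #!/usr/bin/env bash
  # Illustrative only: identify each controller seen in this log, one at a time.
  IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  for bdf in 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
      "$IDENTIFY" -r "trtype:PCIe traddr:${bdf}" -i 0
  done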
00:10:32.750 Max Number of I/O Queues: 64 00:10:32.750 NVMe Specification Version (VS): 1.4 00:10:32.750 NVMe Specification Version (Identify): 1.4 00:10:32.750 Maximum Queue Entries: 2048 00:10:32.750 Contiguous Queues Required: Yes 00:10:32.750 Arbitration Mechanisms Supported 00:10:32.750 Weighted Round Robin: Not Supported 00:10:32.750 Vendor Specific: Not Supported 00:10:32.750 Reset Timeout: 7500 ms 00:10:32.750 Doorbell Stride: 4 bytes 00:10:32.750 NVM Subsystem Reset: Not Supported 00:10:32.750 Command Sets Supported 00:10:32.750 NVM Command Set: Supported 00:10:32.750 Boot Partition: Not Supported 00:10:32.750 Memory Page Size Minimum: 4096 bytes 00:10:32.750 Memory Page Size Maximum: 65536 bytes 00:10:32.750 Persistent Memory Region: Not Supported 00:10:32.750 Optional Asynchronous Events Supported 00:10:32.750 Namespace Attribute Notices: Supported 00:10:32.750 Firmware Activation Notices: Not Supported 00:10:32.750 ANA Change Notices: Not Supported 00:10:32.750 PLE Aggregate Log Change Notices: Not Supported 00:10:32.750 LBA Status Info Alert Notices: Not Supported 00:10:32.750 EGE Aggregate Log Change Notices: Not Supported 00:10:32.750 Normal NVM Subsystem Shutdown event: Not Supported 00:10:32.750 Zone Descriptor Change Notices: Not Supported 00:10:32.750 Discovery Log Change Notices: Not Supported 00:10:32.750 Controller Attributes 00:10:32.750 128-bit Host Identifier: Not Supported 00:10:32.750 Non-Operational Permissive Mode: Not Supported 00:10:32.750 NVM Sets: Not Supported 00:10:32.750 Read Recovery Levels: Not Supported 00:10:32.750 Endurance Groups: Not Supported 00:10:32.750 Predictable Latency Mode: Not Supported 00:10:32.750 Traffic Based Keep ALive: Not Supported 00:10:32.750 Namespace Granularity: Not Supported 00:10:32.750 SQ Associations: Not Supported 00:10:32.750 UUID List: Not Supported 00:10:32.750 Multi-Domain Subsystem: Not Supported 00:10:32.750 Fixed Capacity Management: Not Supported 00:10:32.750 Variable Capacity Management: Not Supported 00:10:32.750 Delete Endurance Group: Not Supported 00:10:32.750 Delete NVM Set: Not Supported 00:10:32.750 Extended LBA Formats Supported: Supported 00:10:32.751 Flexible Data Placement Supported: Not Supported 00:10:32.751 00:10:32.751 Controller Memory Buffer Support 00:10:32.751 ================================ 00:10:32.751 Supported: No 00:10:32.751 00:10:32.751 Persistent Memory Region Support 00:10:32.751 ================================ 00:10:32.751 Supported: No 00:10:32.751 00:10:32.751 Admin Command Set Attributes 00:10:32.751 ============================ 00:10:32.751 Security Send/Receive: Not Supported 00:10:32.751 Format NVM: Supported 00:10:32.751 Firmware Activate/Download: Not Supported 00:10:32.751 Namespace Management: Supported 00:10:32.751 Device Self-Test: Not Supported 00:10:32.751 Directives: Supported 00:10:32.751 NVMe-MI: Not Supported 00:10:32.751 Virtualization Management: Not Supported 00:10:32.751 Doorbell Buffer Config: Supported 00:10:32.751 Get LBA Status Capability: Not Supported 00:10:32.751 Command & Feature Lockdown Capability: Not Supported 00:10:32.751 Abort Command Limit: 4 00:10:32.751 Async Event Request Limit: 4 00:10:32.751 Number of Firmware Slots: N/A 00:10:32.751 Firmware Slot 1 Read-Only: N/A 00:10:32.751 Firmware Activation Without Reset: N/A 00:10:32.751 Multiple Update Detection Support: N/A 00:10:32.751 Firmware Update Granularity: No Information Provided 00:10:32.751 Per-Namespace SMART Log: Yes 00:10:32.751 Asymmetric Namespace Access Log Page: Not Supported 
00:10:32.751 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:32.751 Command Effects Log Page: Supported 00:10:32.751 Get Log Page Extended Data: Supported 00:10:32.751 Telemetry Log Pages: Not Supported 00:10:32.751 Persistent Event Log Pages: Not Supported 00:10:32.751 Supported Log Pages Log Page: May Support 00:10:32.751 Commands Supported & Effects Log Page: Not Supported 00:10:32.751 Feature Identifiers & Effects Log Page:May Support 00:10:32.751 NVMe-MI Commands & Effects Log Page: May Support 00:10:32.751 Data Area 4 for Telemetry Log: Not Supported 00:10:32.751 Error Log Page Entries Supported: 1 00:10:32.751 Keep Alive: Not Supported 00:10:32.751 00:10:32.751 NVM Command Set Attributes 00:10:32.751 ========================== 00:10:32.751 Submission Queue Entry Size 00:10:32.751 Max: 64 00:10:32.751 Min: 64 00:10:32.751 Completion Queue Entry Size 00:10:32.751 Max: 16 00:10:32.751 Min: 16 00:10:32.751 Number of Namespaces: 256 00:10:32.751 Compare Command: Supported 00:10:32.751 Write Uncorrectable Command: Not Supported 00:10:32.751 Dataset Management Command: Supported 00:10:32.751 Write Zeroes Command: Supported 00:10:32.751 Set Features Save Field: Supported 00:10:32.751 Reservations: Not Supported 00:10:32.751 Timestamp: Supported 00:10:32.751 Copy: Supported 00:10:32.751 Volatile Write Cache: Present 00:10:32.751 Atomic Write Unit (Normal): 1 00:10:32.751 Atomic Write Unit (PFail): 1 00:10:32.751 Atomic Compare & Write Unit: 1 00:10:32.751 Fused Compare & Write: Not Supported 00:10:32.751 Scatter-Gather List 00:10:32.751 SGL Command Set: Supported 00:10:32.751 SGL Keyed: Not Supported 00:10:32.751 SGL Bit Bucket Descriptor: Not Supported 00:10:32.751 SGL Metadata Pointer: Not Supported 00:10:32.751 Oversized SGL: Not Supported 00:10:32.751 SGL Metadata Address: Not Supported 00:10:32.751 SGL Offset: Not Supported 00:10:32.751 Transport SGL Data Block: Not Supported 00:10:32.751 Replay Protected Memory Block: Not Supported 00:10:32.751 00:10:32.751 Firmware Slot Information 00:10:32.751 ========================= 00:10:32.751 Active slot: 1 00:10:32.751 Slot 1 Firmware Revision: 1.0 00:10:32.751 00:10:32.751 00:10:32.751 Commands Supported and Effects 00:10:32.751 ============================== 00:10:32.751 Admin Commands 00:10:32.751 -------------- 00:10:32.751 Delete I/O Submission Queue (00h): Supported 00:10:32.751 Create I/O Submission Queue (01h): Supported 00:10:32.751 Get Log Page (02h): Supported 00:10:32.751 Delete I/O Completion Queue (04h): Supported 00:10:32.751 Create I/O Completion Queue (05h): Supported 00:10:32.751 Identify (06h): Supported 00:10:32.751 Abort (08h): Supported 00:10:32.751 Set Features (09h): Supported 00:10:32.751 Get Features (0Ah): Supported 00:10:32.751 Asynchronous Event Request (0Ch): Supported 00:10:32.751 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:32.751 Directive Send (19h): Supported 00:10:32.751 Directive Receive (1Ah): Supported 00:10:32.751 Virtualization Management (1Ch): Supported 00:10:32.751 Doorbell Buffer Config (7Ch): Supported 00:10:32.751 Format NVM (80h): Supported LBA-Change 00:10:32.751 I/O Commands 00:10:32.751 ------------ 00:10:32.751 Flush (00h): Supported LBA-Change 00:10:32.751 Write (01h): Supported LBA-Change 00:10:32.751 Read (02h): Supported 00:10:32.751 Compare (05h): Supported 00:10:32.751 Write Zeroes (08h): Supported LBA-Change 00:10:32.751 Dataset Management (09h): Supported LBA-Change 00:10:32.751 Unknown (0Ch): Supported 00:10:32.751 Unknown (12h): Supported 00:10:32.751 Copy (19h): 
Supported LBA-Change 00:10:32.751 Unknown (1Dh): Supported LBA-Change 00:10:32.751 00:10:32.751 Error Log 00:10:32.751 ========= 00:10:32.751 00:10:32.751 Arbitration 00:10:32.751 =========== 00:10:32.751 Arbitration Burst: no limit 00:10:32.751 00:10:32.751 Power Management 00:10:32.751 ================ 00:10:32.751 Number of Power States: 1 00:10:32.751 Current Power State: Power State #0 00:10:32.751 Power State #0: 00:10:32.751 Max Power: 25.00 W 00:10:32.751 Non-Operational State: Operational 00:10:32.751 Entry Latency: 16 microseconds 00:10:32.751 Exit Latency: 4 microseconds 00:10:32.751 Relative Read Throughput: 0 00:10:32.751 Relative Read Latency: 0 00:10:32.751 Relative Write Throughput: 0 00:10:32.751 Relative Write Latency: 0 00:10:32.751 Idle Power: Not Reported 00:10:32.751 Active Power: Not Reported 00:10:32.751 Non-Operational Permissive Mode: Not Supported 00:10:32.751 00:10:32.751 Health Information 00:10:32.751 ================== 00:10:32.751 Critical Warnings: 00:10:32.751 Available Spare Space: OK 00:10:32.751 Temperature: OK 00:10:32.751 Device Reliability: OK 00:10:32.751 Read Only: No 00:10:32.751 Volatile Memory Backup: OK 00:10:32.751 Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.751 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:32.751 Available Spare: 0% 00:10:32.751 Available Spare Threshold: 0% 00:10:32.751 Life Percentage Used: 0% 00:10:32.751 Data Units Read: 1788 00:10:32.751 Data Units Written: 823 00:10:32.751 Host Read Commands: 88241 00:10:32.751 Host Write Commands: 43816 00:10:32.751 Controller Busy Time: 0 minutes 00:10:32.751 Power Cycles: 0 00:10:32.751 Power On Hours: 0 hours 00:10:32.751 Unsafe Shutdowns: 0 00:10:32.751 Unrecoverable Media Errors: 0 00:10:32.751 Lifetime Error Log Entries: 0 00:10:32.751 Warning Temperature Time: 0 minutes 00:10:32.752 Critical Temperature Time: 0 minutes 00:10:32.752 00:10:32.752 Number of Queues 00:10:32.752 ================ 00:10:32.752 Number of I/O Submission Queues: 64 00:10:32.752 Number of I/O Completion Queues: 64 00:10:32.752 00:10:32.752 ZNS Specific Controller Data 00:10:32.752 ============================ 00:10:32.752 Zone Append Size Limit: 0 00:10:32.752 00:10:32.752 00:10:32.752 Active Namespaces 00:10:32.752 ================= 00:10:32.752 Namespace ID:1 00:10:32.752 Error Recovery Timeout: Unlimited 00:10:32.752 Command Set Identifier: NVM (00h) 00:10:32.752 Deallocate: Supported 00:10:32.752 Deallocated/Unwritten Error: Supported 00:10:32.752 Deallocated Read Value: All 0x00 00:10:32.752 Deallocate in Write Zeroes: Not Supported 00:10:32.752 Deallocated Guard Field: 0xFFFF 00:10:32.752 Flush: Supported 00:10:32.752 Reservation: Not Supported 00:10:32.752 Metadata Transferred as: Separate Metadata Buffer 00:10:32.752 Namespace Sharing Capabilities: Private 00:10:32.752 Size (in LBAs): 1548666 (5GiB) 00:10:32.752 Capacity (in LBAs): 1548666 (5GiB) 00:10:32.752 Utilization (in LBAs): 1548666 (5GiB) 00:10:32.752 Thin Provisioning: Not Supported 00:10:32.752 Per-NS Atomic Units: No 00:10:32.752 Maximum Single Source Range Length: 128 00:10:32.752 Maximum Copy Length: 128 00:10:32.752 Maximum Source Range Count: 128 00:10:32.752 NGUID/EUI64 Never Reused: No 00:10:32.752 Namespace Write Protected: No 00:10:32.752 Number of LBA Formats: 8 00:10:32.752 Current LBA Format: LBA Format #07 00:10:32.752 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:32.752 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:32.752 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:32.752 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:10:32.752 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:32.752 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:32.752 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:32.752 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:32.752 00:10:32.752 19:11:09 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:32.752 19:11:09 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:10:33.012 ===================================================== 00:10:33.012 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:33.012 ===================================================== 00:10:33.012 Controller Capabilities/Features 00:10:33.012 ================================ 00:10:33.012 Vendor ID: 1b36 00:10:33.012 Subsystem Vendor ID: 1af4 00:10:33.012 Serial Number: 12341 00:10:33.012 Model Number: QEMU NVMe Ctrl 00:10:33.012 Firmware Version: 8.0.0 00:10:33.012 Recommended Arb Burst: 6 00:10:33.012 IEEE OUI Identifier: 00 54 52 00:10:33.012 Multi-path I/O 00:10:33.012 May have multiple subsystem ports: No 00:10:33.012 May have multiple controllers: No 00:10:33.012 Associated with SR-IOV VF: No 00:10:33.012 Max Data Transfer Size: 524288 00:10:33.012 Max Number of Namespaces: 256 00:10:33.012 Max Number of I/O Queues: 64 00:10:33.012 NVMe Specification Version (VS): 1.4 00:10:33.012 NVMe Specification Version (Identify): 1.4 00:10:33.012 Maximum Queue Entries: 2048 00:10:33.012 Contiguous Queues Required: Yes 00:10:33.012 Arbitration Mechanisms Supported 00:10:33.012 Weighted Round Robin: Not Supported 00:10:33.012 Vendor Specific: Not Supported 00:10:33.012 Reset Timeout: 7500 ms 00:10:33.012 Doorbell Stride: 4 bytes 00:10:33.012 NVM Subsystem Reset: Not Supported 00:10:33.012 Command Sets Supported 00:10:33.012 NVM Command Set: Supported 00:10:33.012 Boot Partition: Not Supported 00:10:33.012 Memory Page Size Minimum: 4096 bytes 00:10:33.012 Memory Page Size Maximum: 65536 bytes 00:10:33.012 Persistent Memory Region: Not Supported 00:10:33.012 Optional Asynchronous Events Supported 00:10:33.012 Namespace Attribute Notices: Supported 00:10:33.012 Firmware Activation Notices: Not Supported 00:10:33.012 ANA Change Notices: Not Supported 00:10:33.012 PLE Aggregate Log Change Notices: Not Supported 00:10:33.012 LBA Status Info Alert Notices: Not Supported 00:10:33.012 EGE Aggregate Log Change Notices: Not Supported 00:10:33.012 Normal NVM Subsystem Shutdown event: Not Supported 00:10:33.012 Zone Descriptor Change Notices: Not Supported 00:10:33.012 Discovery Log Change Notices: Not Supported 00:10:33.012 Controller Attributes 00:10:33.012 128-bit Host Identifier: Not Supported 00:10:33.012 Non-Operational Permissive Mode: Not Supported 00:10:33.012 NVM Sets: Not Supported 00:10:33.012 Read Recovery Levels: Not Supported 00:10:33.012 Endurance Groups: Not Supported 00:10:33.012 Predictable Latency Mode: Not Supported 00:10:33.012 Traffic Based Keep ALive: Not Supported 00:10:33.012 Namespace Granularity: Not Supported 00:10:33.012 SQ Associations: Not Supported 00:10:33.012 UUID List: Not Supported 00:10:33.012 Multi-Domain Subsystem: Not Supported 00:10:33.012 Fixed Capacity Management: Not Supported 00:10:33.012 Variable Capacity Management: Not Supported 00:10:33.012 Delete Endurance Group: Not Supported 00:10:33.012 Delete NVM Set: Not Supported 00:10:33.012 Extended LBA Formats Supported: Supported 00:10:33.012 Flexible Data Placement Supported: Not Supported 
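When only a few fields of an identify dump matter, the invocation shown above can be piped through a filter; a sketch using the command exactly as it appears in this log, with field names copied from the output (the filter itself is not part of the test scripts):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 \
    | grep -E 'Serial Number|Firmware Version|Current Temperature|Data Units (Read|Written)'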
00:10:33.012 00:10:33.012 Controller Memory Buffer Support 00:10:33.012 ================================ 00:10:33.012 Supported: No 00:10:33.012 00:10:33.012 Persistent Memory Region Support 00:10:33.012 ================================ 00:10:33.012 Supported: No 00:10:33.012 00:10:33.012 Admin Command Set Attributes 00:10:33.012 ============================ 00:10:33.012 Security Send/Receive: Not Supported 00:10:33.012 Format NVM: Supported 00:10:33.012 Firmware Activate/Download: Not Supported 00:10:33.012 Namespace Management: Supported 00:10:33.012 Device Self-Test: Not Supported 00:10:33.012 Directives: Supported 00:10:33.012 NVMe-MI: Not Supported 00:10:33.012 Virtualization Management: Not Supported 00:10:33.012 Doorbell Buffer Config: Supported 00:10:33.012 Get LBA Status Capability: Not Supported 00:10:33.012 Command & Feature Lockdown Capability: Not Supported 00:10:33.012 Abort Command Limit: 4 00:10:33.012 Async Event Request Limit: 4 00:10:33.012 Number of Firmware Slots: N/A 00:10:33.012 Firmware Slot 1 Read-Only: N/A 00:10:33.012 Firmware Activation Without Reset: N/A 00:10:33.012 Multiple Update Detection Support: N/A 00:10:33.012 Firmware Update Granularity: No Information Provided 00:10:33.012 Per-Namespace SMART Log: Yes 00:10:33.012 Asymmetric Namespace Access Log Page: Not Supported 00:10:33.012 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:33.012 Command Effects Log Page: Supported 00:10:33.012 Get Log Page Extended Data: Supported 00:10:33.012 Telemetry Log Pages: Not Supported 00:10:33.012 Persistent Event Log Pages: Not Supported 00:10:33.012 Supported Log Pages Log Page: May Support 00:10:33.012 Commands Supported & Effects Log Page: Not Supported 00:10:33.012 Feature Identifiers & Effects Log Page:May Support 00:10:33.012 NVMe-MI Commands & Effects Log Page: May Support 00:10:33.012 Data Area 4 for Telemetry Log: Not Supported 00:10:33.012 Error Log Page Entries Supported: 1 00:10:33.012 Keep Alive: Not Supported 00:10:33.012 00:10:33.012 NVM Command Set Attributes 00:10:33.012 ========================== 00:10:33.012 Submission Queue Entry Size 00:10:33.012 Max: 64 00:10:33.012 Min: 64 00:10:33.012 Completion Queue Entry Size 00:10:33.012 Max: 16 00:10:33.012 Min: 16 00:10:33.012 Number of Namespaces: 256 00:10:33.012 Compare Command: Supported 00:10:33.012 Write Uncorrectable Command: Not Supported 00:10:33.012 Dataset Management Command: Supported 00:10:33.012 Write Zeroes Command: Supported 00:10:33.012 Set Features Save Field: Supported 00:10:33.012 Reservations: Not Supported 00:10:33.012 Timestamp: Supported 00:10:33.012 Copy: Supported 00:10:33.012 Volatile Write Cache: Present 00:10:33.012 Atomic Write Unit (Normal): 1 00:10:33.012 Atomic Write Unit (PFail): 1 00:10:33.012 Atomic Compare & Write Unit: 1 00:10:33.012 Fused Compare & Write: Not Supported 00:10:33.012 Scatter-Gather List 00:10:33.012 SGL Command Set: Supported 00:10:33.012 SGL Keyed: Not Supported 00:10:33.012 SGL Bit Bucket Descriptor: Not Supported 00:10:33.012 SGL Metadata Pointer: Not Supported 00:10:33.012 Oversized SGL: Not Supported 00:10:33.012 SGL Metadata Address: Not Supported 00:10:33.012 SGL Offset: Not Supported 00:10:33.012 Transport SGL Data Block: Not Supported 00:10:33.012 Replay Protected Memory Block: Not Supported 00:10:33.012 00:10:33.012 Firmware Slot Information 00:10:33.012 ========================= 00:10:33.012 Active slot: 1 00:10:33.013 Slot 1 Firmware Revision: 1.0 00:10:33.013 00:10:33.013 00:10:33.013 Commands Supported and Effects 00:10:33.013 
============================== 00:10:33.013 Admin Commands 00:10:33.013 -------------- 00:10:33.013 Delete I/O Submission Queue (00h): Supported 00:10:33.013 Create I/O Submission Queue (01h): Supported 00:10:33.013 Get Log Page (02h): Supported 00:10:33.013 Delete I/O Completion Queue (04h): Supported 00:10:33.013 Create I/O Completion Queue (05h): Supported 00:10:33.013 Identify (06h): Supported 00:10:33.013 Abort (08h): Supported 00:10:33.013 Set Features (09h): Supported 00:10:33.013 Get Features (0Ah): Supported 00:10:33.013 Asynchronous Event Request (0Ch): Supported 00:10:33.013 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:33.013 Directive Send (19h): Supported 00:10:33.013 Directive Receive (1Ah): Supported 00:10:33.013 Virtualization Management (1Ch): Supported 00:10:33.013 Doorbell Buffer Config (7Ch): Supported 00:10:33.013 Format NVM (80h): Supported LBA-Change 00:10:33.013 I/O Commands 00:10:33.013 ------------ 00:10:33.013 Flush (00h): Supported LBA-Change 00:10:33.013 Write (01h): Supported LBA-Change 00:10:33.013 Read (02h): Supported 00:10:33.013 Compare (05h): Supported 00:10:33.013 Write Zeroes (08h): Supported LBA-Change 00:10:33.013 Dataset Management (09h): Supported LBA-Change 00:10:33.013 Unknown (0Ch): Supported 00:10:33.013 Unknown (12h): Supported 00:10:33.013 Copy (19h): Supported LBA-Change 00:10:33.013 Unknown (1Dh): Supported LBA-Change 00:10:33.013 00:10:33.013 Error Log 00:10:33.013 ========= 00:10:33.013 00:10:33.013 Arbitration 00:10:33.013 =========== 00:10:33.013 Arbitration Burst: no limit 00:10:33.013 00:10:33.013 Power Management 00:10:33.013 ================ 00:10:33.013 Number of Power States: 1 00:10:33.013 Current Power State: Power State #0 00:10:33.013 Power State #0: 00:10:33.013 Max Power: 25.00 W 00:10:33.013 Non-Operational State: Operational 00:10:33.013 Entry Latency: 16 microseconds 00:10:33.013 Exit Latency: 4 microseconds 00:10:33.013 Relative Read Throughput: 0 00:10:33.013 Relative Read Latency: 0 00:10:33.013 Relative Write Throughput: 0 00:10:33.013 Relative Write Latency: 0 00:10:33.013 Idle Power: Not Reported 00:10:33.013 Active Power: Not Reported 00:10:33.013 Non-Operational Permissive Mode: Not Supported 00:10:33.013 00:10:33.013 Health Information 00:10:33.013 ================== 00:10:33.013 Critical Warnings: 00:10:33.013 Available Spare Space: OK 00:10:33.013 Temperature: OK 00:10:33.013 Device Reliability: OK 00:10:33.013 Read Only: No 00:10:33.013 Volatile Memory Backup: OK 00:10:33.013 Current Temperature: 323 Kelvin (50 Celsius) 00:10:33.013 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:33.013 Available Spare: 0% 00:10:33.013 Available Spare Threshold: 0% 00:10:33.013 Life Percentage Used: 0% 00:10:33.013 Data Units Read: 1217 00:10:33.013 Data Units Written: 566 00:10:33.013 Host Read Commands: 60966 00:10:33.013 Host Write Commands: 30036 00:10:33.013 Controller Busy Time: 0 minutes 00:10:33.013 Power Cycles: 0 00:10:33.013 Power On Hours: 0 hours 00:10:33.013 Unsafe Shutdowns: 0 00:10:33.013 Unrecoverable Media Errors: 0 00:10:33.013 Lifetime Error Log Entries: 0 00:10:33.013 Warning Temperature Time: 0 minutes 00:10:33.013 Critical Temperature Time: 0 minutes 00:10:33.013 00:10:33.013 Number of Queues 00:10:33.013 ================ 00:10:33.013 Number of I/O Submission Queues: 64 00:10:33.013 Number of I/O Completion Queues: 64 00:10:33.013 00:10:33.013 ZNS Specific Controller Data 00:10:33.013 ============================ 00:10:33.013 Zone Append Size Limit: 0 00:10:33.013 00:10:33.013 
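The health section above for this controller (serial 12341) reports temperatures in Kelvin with the Celsius value in parentheses; the printed conversion is evidently K - 273. A one-line check of the two reported values, copied from the log:

  echo "current: $(( 323 - 273 )) C, threshold: $(( 343 - 273 )) C"
  # prints: current: 50 C, threshold: 70 C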
00:10:33.013 Active Namespaces 00:10:33.013 ================= 00:10:33.013 Namespace ID:1 00:10:33.013 Error Recovery Timeout: Unlimited 00:10:33.013 Command Set Identifier: NVM (00h) 00:10:33.013 Deallocate: Supported 00:10:33.013 Deallocated/Unwritten Error: Supported 00:10:33.013 Deallocated Read Value: All 0x00 00:10:33.013 Deallocate in Write Zeroes: Not Supported 00:10:33.013 Deallocated Guard Field: 0xFFFF 00:10:33.013 Flush: Supported 00:10:33.013 Reservation: Not Supported 00:10:33.013 Namespace Sharing Capabilities: Private 00:10:33.013 Size (in LBAs): 1310720 (5GiB) 00:10:33.013 Capacity (in LBAs): 1310720 (5GiB) 00:10:33.013 Utilization (in LBAs): 1310720 (5GiB) 00:10:33.013 Thin Provisioning: Not Supported 00:10:33.013 Per-NS Atomic Units: No 00:10:33.013 Maximum Single Source Range Length: 128 00:10:33.013 Maximum Copy Length: 128 00:10:33.013 Maximum Source Range Count: 128 00:10:33.013 NGUID/EUI64 Never Reused: No 00:10:33.013 Namespace Write Protected: No 00:10:33.013 Number of LBA Formats: 8 00:10:33.013 Current LBA Format: LBA Format #04 00:10:33.013 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:33.013 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:33.013 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:33.013 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:33.013 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:33.013 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:33.013 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:33.013 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:33.013 00:10:33.013 19:11:10 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:33.013 19:11:10 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:10:33.273 ===================================================== 00:10:33.273 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:33.273 ===================================================== 00:10:33.273 Controller Capabilities/Features 00:10:33.273 ================================ 00:10:33.273 Vendor ID: 1b36 00:10:33.273 Subsystem Vendor ID: 1af4 00:10:33.273 Serial Number: 12342 00:10:33.273 Model Number: QEMU NVMe Ctrl 00:10:33.273 Firmware Version: 8.0.0 00:10:33.273 Recommended Arb Burst: 6 00:10:33.273 IEEE OUI Identifier: 00 54 52 00:10:33.273 Multi-path I/O 00:10:33.273 May have multiple subsystem ports: No 00:10:33.273 May have multiple controllers: No 00:10:33.273 Associated with SR-IOV VF: No 00:10:33.273 Max Data Transfer Size: 524288 00:10:33.273 Max Number of Namespaces: 256 00:10:33.273 Max Number of I/O Queues: 64 00:10:33.273 NVMe Specification Version (VS): 1.4 00:10:33.273 NVMe Specification Version (Identify): 1.4 00:10:33.273 Maximum Queue Entries: 2048 00:10:33.273 Contiguous Queues Required: Yes 00:10:33.273 Arbitration Mechanisms Supported 00:10:33.273 Weighted Round Robin: Not Supported 00:10:33.273 Vendor Specific: Not Supported 00:10:33.273 Reset Timeout: 7500 ms 00:10:33.273 Doorbell Stride: 4 bytes 00:10:33.273 NVM Subsystem Reset: Not Supported 00:10:33.273 Command Sets Supported 00:10:33.273 NVM Command Set: Supported 00:10:33.273 Boot Partition: Not Supported 00:10:33.273 Memory Page Size Minimum: 4096 bytes 00:10:33.273 Memory Page Size Maximum: 65536 bytes 00:10:33.273 Persistent Memory Region: Not Supported 00:10:33.273 Optional Asynchronous Events Supported 00:10:33.273 Namespace Attribute Notices: Supported 00:10:33.273 Firmware Activation Notices: Not Supported 00:10:33.273 ANA Change 
Notices: Not Supported 00:10:33.273 PLE Aggregate Log Change Notices: Not Supported 00:10:33.273 LBA Status Info Alert Notices: Not Supported 00:10:33.273 EGE Aggregate Log Change Notices: Not Supported 00:10:33.273 Normal NVM Subsystem Shutdown event: Not Supported 00:10:33.273 Zone Descriptor Change Notices: Not Supported 00:10:33.273 Discovery Log Change Notices: Not Supported 00:10:33.273 Controller Attributes 00:10:33.273 128-bit Host Identifier: Not Supported 00:10:33.273 Non-Operational Permissive Mode: Not Supported 00:10:33.273 NVM Sets: Not Supported 00:10:33.273 Read Recovery Levels: Not Supported 00:10:33.273 Endurance Groups: Not Supported 00:10:33.273 Predictable Latency Mode: Not Supported 00:10:33.273 Traffic Based Keep ALive: Not Supported 00:10:33.273 Namespace Granularity: Not Supported 00:10:33.273 SQ Associations: Not Supported 00:10:33.273 UUID List: Not Supported 00:10:33.273 Multi-Domain Subsystem: Not Supported 00:10:33.273 Fixed Capacity Management: Not Supported 00:10:33.273 Variable Capacity Management: Not Supported 00:10:33.273 Delete Endurance Group: Not Supported 00:10:33.274 Delete NVM Set: Not Supported 00:10:33.274 Extended LBA Formats Supported: Supported 00:10:33.274 Flexible Data Placement Supported: Not Supported 00:10:33.274 00:10:33.274 Controller Memory Buffer Support 00:10:33.274 ================================ 00:10:33.274 Supported: No 00:10:33.274 00:10:33.274 Persistent Memory Region Support 00:10:33.274 ================================ 00:10:33.274 Supported: No 00:10:33.274 00:10:33.274 Admin Command Set Attributes 00:10:33.274 ============================ 00:10:33.274 Security Send/Receive: Not Supported 00:10:33.274 Format NVM: Supported 00:10:33.274 Firmware Activate/Download: Not Supported 00:10:33.274 Namespace Management: Supported 00:10:33.274 Device Self-Test: Not Supported 00:10:33.274 Directives: Supported 00:10:33.274 NVMe-MI: Not Supported 00:10:33.274 Virtualization Management: Not Supported 00:10:33.274 Doorbell Buffer Config: Supported 00:10:33.274 Get LBA Status Capability: Not Supported 00:10:33.274 Command & Feature Lockdown Capability: Not Supported 00:10:33.274 Abort Command Limit: 4 00:10:33.274 Async Event Request Limit: 4 00:10:33.274 Number of Firmware Slots: N/A 00:10:33.274 Firmware Slot 1 Read-Only: N/A 00:10:33.274 Firmware Activation Without Reset: N/A 00:10:33.274 Multiple Update Detection Support: N/A 00:10:33.274 Firmware Update Granularity: No Information Provided 00:10:33.274 Per-Namespace SMART Log: Yes 00:10:33.274 Asymmetric Namespace Access Log Page: Not Supported 00:10:33.274 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:33.274 Command Effects Log Page: Supported 00:10:33.274 Get Log Page Extended Data: Supported 00:10:33.274 Telemetry Log Pages: Not Supported 00:10:33.274 Persistent Event Log Pages: Not Supported 00:10:33.274 Supported Log Pages Log Page: May Support 00:10:33.274 Commands Supported & Effects Log Page: Not Supported 00:10:33.274 Feature Identifiers & Effects Log Page:May Support 00:10:33.274 NVMe-MI Commands & Effects Log Page: May Support 00:10:33.274 Data Area 4 for Telemetry Log: Not Supported 00:10:33.274 Error Log Page Entries Supported: 1 00:10:33.274 Keep Alive: Not Supported 00:10:33.274 00:10:33.274 NVM Command Set Attributes 00:10:33.274 ========================== 00:10:33.274 Submission Queue Entry Size 00:10:33.274 Max: 64 00:10:33.274 Min: 64 00:10:33.274 Completion Queue Entry Size 00:10:33.274 Max: 16 00:10:33.274 Min: 16 00:10:33.274 Number of Namespaces: 256 
00:10:33.274 Compare Command: Supported 00:10:33.274 Write Uncorrectable Command: Not Supported 00:10:33.274 Dataset Management Command: Supported 00:10:33.274 Write Zeroes Command: Supported 00:10:33.274 Set Features Save Field: Supported 00:10:33.274 Reservations: Not Supported 00:10:33.274 Timestamp: Supported 00:10:33.274 Copy: Supported 00:10:33.274 Volatile Write Cache: Present 00:10:33.274 Atomic Write Unit (Normal): 1 00:10:33.274 Atomic Write Unit (PFail): 1 00:10:33.274 Atomic Compare & Write Unit: 1 00:10:33.274 Fused Compare & Write: Not Supported 00:10:33.274 Scatter-Gather List 00:10:33.274 SGL Command Set: Supported 00:10:33.274 SGL Keyed: Not Supported 00:10:33.274 SGL Bit Bucket Descriptor: Not Supported 00:10:33.274 SGL Metadata Pointer: Not Supported 00:10:33.274 Oversized SGL: Not Supported 00:10:33.274 SGL Metadata Address: Not Supported 00:10:33.274 SGL Offset: Not Supported 00:10:33.274 Transport SGL Data Block: Not Supported 00:10:33.274 Replay Protected Memory Block: Not Supported 00:10:33.274 00:10:33.274 Firmware Slot Information 00:10:33.274 ========================= 00:10:33.274 Active slot: 1 00:10:33.274 Slot 1 Firmware Revision: 1.0 00:10:33.274 00:10:33.274 00:10:33.274 Commands Supported and Effects 00:10:33.274 ============================== 00:10:33.274 Admin Commands 00:10:33.274 -------------- 00:10:33.274 Delete I/O Submission Queue (00h): Supported 00:10:33.274 Create I/O Submission Queue (01h): Supported 00:10:33.274 Get Log Page (02h): Supported 00:10:33.274 Delete I/O Completion Queue (04h): Supported 00:10:33.274 Create I/O Completion Queue (05h): Supported 00:10:33.274 Identify (06h): Supported 00:10:33.274 Abort (08h): Supported 00:10:33.274 Set Features (09h): Supported 00:10:33.274 Get Features (0Ah): Supported 00:10:33.274 Asynchronous Event Request (0Ch): Supported 00:10:33.274 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:33.274 Directive Send (19h): Supported 00:10:33.274 Directive Receive (1Ah): Supported 00:10:33.274 Virtualization Management (1Ch): Supported 00:10:33.274 Doorbell Buffer Config (7Ch): Supported 00:10:33.274 Format NVM (80h): Supported LBA-Change 00:10:33.274 I/O Commands 00:10:33.274 ------------ 00:10:33.274 Flush (00h): Supported LBA-Change 00:10:33.274 Write (01h): Supported LBA-Change 00:10:33.274 Read (02h): Supported 00:10:33.274 Compare (05h): Supported 00:10:33.274 Write Zeroes (08h): Supported LBA-Change 00:10:33.274 Dataset Management (09h): Supported LBA-Change 00:10:33.274 Unknown (0Ch): Supported 00:10:33.274 Unknown (12h): Supported 00:10:33.274 Copy (19h): Supported LBA-Change 00:10:33.274 Unknown (1Dh): Supported LBA-Change 00:10:33.274 00:10:33.274 Error Log 00:10:33.274 ========= 00:10:33.274 00:10:33.274 Arbitration 00:10:33.274 =========== 00:10:33.274 Arbitration Burst: no limit 00:10:33.274 00:10:33.274 Power Management 00:10:33.274 ================ 00:10:33.274 Number of Power States: 1 00:10:33.274 Current Power State: Power State #0 00:10:33.274 Power State #0: 00:10:33.274 Max Power: 25.00 W 00:10:33.274 Non-Operational State: Operational 00:10:33.274 Entry Latency: 16 microseconds 00:10:33.274 Exit Latency: 4 microseconds 00:10:33.274 Relative Read Throughput: 0 00:10:33.274 Relative Read Latency: 0 00:10:33.274 Relative Write Throughput: 0 00:10:33.274 Relative Write Latency: 0 00:10:33.274 Idle Power: Not Reported 00:10:33.274 Active Power: Not Reported 00:10:33.274 Non-Operational Permissive Mode: Not Supported 00:10:33.274 00:10:33.274 Health Information 00:10:33.274 
================== 00:10:33.274 Critical Warnings: 00:10:33.274 Available Spare Space: OK 00:10:33.274 Temperature: OK 00:10:33.274 Device Reliability: OK 00:10:33.274 Read Only: No 00:10:33.274 Volatile Memory Backup: OK 00:10:33.274 Current Temperature: 323 Kelvin (50 Celsius) 00:10:33.274 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:33.274 Available Spare: 0% 00:10:33.274 Available Spare Threshold: 0% 00:10:33.274 Life Percentage Used: 0% 00:10:33.274 Data Units Read: 3747 00:10:33.274 Data Units Written: 1731 00:10:33.274 Host Read Commands: 183914 00:10:33.274 Host Write Commands: 90444 00:10:33.274 Controller Busy Time: 0 minutes 00:10:33.274 Power Cycles: 0 00:10:33.274 Power On Hours: 0 hours 00:10:33.274 Unsafe Shutdowns: 0 00:10:33.274 Unrecoverable Media Errors: 0 00:10:33.274 Lifetime Error Log Entries: 0 00:10:33.274 Warning Temperature Time: 0 minutes 00:10:33.274 Critical Temperature Time: 0 minutes 00:10:33.274 00:10:33.274 Number of Queues 00:10:33.274 ================ 00:10:33.274 Number of I/O Submission Queues: 64 00:10:33.274 Number of I/O Completion Queues: 64 00:10:33.274 00:10:33.274 ZNS Specific Controller Data 00:10:33.274 ============================ 00:10:33.274 Zone Append Size Limit: 0 00:10:33.274 00:10:33.274 00:10:33.274 Active Namespaces 00:10:33.274 ================= 00:10:33.274 Namespace ID:1 00:10:33.274 Error Recovery Timeout: Unlimited 00:10:33.274 Command Set Identifier: NVM (00h) 00:10:33.274 Deallocate: Supported 00:10:33.274 Deallocated/Unwritten Error: Supported 00:10:33.274 Deallocated Read Value: All 0x00 00:10:33.274 Deallocate in Write Zeroes: Not Supported 00:10:33.274 Deallocated Guard Field: 0xFFFF 00:10:33.274 Flush: Supported 00:10:33.274 Reservation: Not Supported 00:10:33.274 Namespace Sharing Capabilities: Private 00:10:33.274 Size (in LBAs): 1048576 (4GiB) 00:10:33.274 Capacity (in LBAs): 1048576 (4GiB) 00:10:33.274 Utilization (in LBAs): 1048576 (4GiB) 00:10:33.274 Thin Provisioning: Not Supported 00:10:33.274 Per-NS Atomic Units: No 00:10:33.274 Maximum Single Source Range Length: 128 00:10:33.274 Maximum Copy Length: 128 00:10:33.274 Maximum Source Range Count: 128 00:10:33.274 NGUID/EUI64 Never Reused: No 00:10:33.274 Namespace Write Protected: No 00:10:33.274 Number of LBA Formats: 8 00:10:33.274 Current LBA Format: LBA Format #04 00:10:33.275 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:33.275 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:33.275 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:33.275 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:33.275 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:33.275 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:33.275 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:33.275 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:33.275 00:10:33.275 Namespace ID:2 00:10:33.275 Error Recovery Timeout: Unlimited 00:10:33.275 Command Set Identifier: NVM (00h) 00:10:33.275 Deallocate: Supported 00:10:33.275 Deallocated/Unwritten Error: Supported 00:10:33.275 Deallocated Read Value: All 0x00 00:10:33.275 Deallocate in Write Zeroes: Not Supported 00:10:33.275 Deallocated Guard Field: 0xFFFF 00:10:33.275 Flush: Supported 00:10:33.275 Reservation: Not Supported 00:10:33.275 Namespace Sharing Capabilities: Private 00:10:33.275 Size (in LBAs): 1048576 (4GiB) 00:10:33.275 Capacity (in LBAs): 1048576 (4GiB) 00:10:33.275 Utilization (in LBAs): 1048576 (4GiB) 00:10:33.275 Thin Provisioning: Not Supported 00:10:33.275 Per-NS Atomic Units: No 
00:10:33.275 Maximum Single Source Range Length: 128 00:10:33.275 Maximum Copy Length: 128 00:10:33.275 Maximum Source Range Count: 128 00:10:33.275 NGUID/EUI64 Never Reused: No 00:10:33.275 Namespace Write Protected: No 00:10:33.275 Number of LBA Formats: 8 00:10:33.275 Current LBA Format: LBA Format #04 00:10:33.275 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:33.275 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:33.275 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:33.275 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:33.275 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:33.275 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:33.275 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:33.275 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:33.275 00:10:33.275 Namespace ID:3 00:10:33.275 Error Recovery Timeout: Unlimited 00:10:33.275 Command Set Identifier: NVM (00h) 00:10:33.275 Deallocate: Supported 00:10:33.275 Deallocated/Unwritten Error: Supported 00:10:33.275 Deallocated Read Value: All 0x00 00:10:33.275 Deallocate in Write Zeroes: Not Supported 00:10:33.275 Deallocated Guard Field: 0xFFFF 00:10:33.275 Flush: Supported 00:10:33.275 Reservation: Not Supported 00:10:33.275 Namespace Sharing Capabilities: Private 00:10:33.275 Size (in LBAs): 1048576 (4GiB) 00:10:33.275 Capacity (in LBAs): 1048576 (4GiB) 00:10:33.275 Utilization (in LBAs): 1048576 (4GiB) 00:10:33.275 Thin Provisioning: Not Supported 00:10:33.275 Per-NS Atomic Units: No 00:10:33.275 Maximum Single Source Range Length: 128 00:10:33.275 Maximum Copy Length: 128 00:10:33.275 Maximum Source Range Count: 128 00:10:33.275 NGUID/EUI64 Never Reused: No 00:10:33.275 Namespace Write Protected: No 00:10:33.275 Number of LBA Formats: 8 00:10:33.275 Current LBA Format: LBA Format #04 00:10:33.275 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:33.275 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:33.275 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:33.275 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:33.275 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:33.275 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:33.275 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:33.275 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:33.275 00:10:33.275 19:11:10 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:33.275 19:11:10 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:10:33.535 ===================================================== 00:10:33.535 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:33.535 ===================================================== 00:10:33.535 Controller Capabilities/Features 00:10:33.535 ================================ 00:10:33.535 Vendor ID: 1b36 00:10:33.535 Subsystem Vendor ID: 1af4 00:10:33.535 Serial Number: 12343 00:10:33.535 Model Number: QEMU NVMe Ctrl 00:10:33.535 Firmware Version: 8.0.0 00:10:33.535 Recommended Arb Burst: 6 00:10:33.535 IEEE OUI Identifier: 00 54 52 00:10:33.535 Multi-path I/O 00:10:33.535 May have multiple subsystem ports: No 00:10:33.535 May have multiple controllers: Yes 00:10:33.535 Associated with SR-IOV VF: No 00:10:33.535 Max Data Transfer Size: 524288 00:10:33.535 Max Number of Namespaces: 256 00:10:33.535 Max Number of I/O Queues: 64 00:10:33.535 NVMe Specification Version (VS): 1.4 00:10:33.535 NVMe Specification Version (Identify): 1.4 00:10:33.535 Maximum Queue Entries: 2048 
00:10:33.535 Contiguous Queues Required: Yes 00:10:33.535 Arbitration Mechanisms Supported 00:10:33.535 Weighted Round Robin: Not Supported 00:10:33.535 Vendor Specific: Not Supported 00:10:33.535 Reset Timeout: 7500 ms 00:10:33.535 Doorbell Stride: 4 bytes 00:10:33.535 NVM Subsystem Reset: Not Supported 00:10:33.535 Command Sets Supported 00:10:33.535 NVM Command Set: Supported 00:10:33.535 Boot Partition: Not Supported 00:10:33.535 Memory Page Size Minimum: 4096 bytes 00:10:33.535 Memory Page Size Maximum: 65536 bytes 00:10:33.535 Persistent Memory Region: Not Supported 00:10:33.535 Optional Asynchronous Events Supported 00:10:33.535 Namespace Attribute Notices: Supported 00:10:33.535 Firmware Activation Notices: Not Supported 00:10:33.535 ANA Change Notices: Not Supported 00:10:33.535 PLE Aggregate Log Change Notices: Not Supported 00:10:33.535 LBA Status Info Alert Notices: Not Supported 00:10:33.535 EGE Aggregate Log Change Notices: Not Supported 00:10:33.535 Normal NVM Subsystem Shutdown event: Not Supported 00:10:33.535 Zone Descriptor Change Notices: Not Supported 00:10:33.535 Discovery Log Change Notices: Not Supported 00:10:33.535 Controller Attributes 00:10:33.535 128-bit Host Identifier: Not Supported 00:10:33.535 Non-Operational Permissive Mode: Not Supported 00:10:33.535 NVM Sets: Not Supported 00:10:33.535 Read Recovery Levels: Not Supported 00:10:33.535 Endurance Groups: Supported 00:10:33.535 Predictable Latency Mode: Not Supported 00:10:33.535 Traffic Based Keep Alive: Not Supported 00:10:33.535 Namespace Granularity: Not Supported 00:10:33.535 SQ Associations: Not Supported 00:10:33.535 UUID List: Not Supported 00:10:33.535 Multi-Domain Subsystem: Not Supported 00:10:33.535 Fixed Capacity Management: Not Supported 00:10:33.535 Variable Capacity Management: Not Supported 00:10:33.535 Delete Endurance Group: Not Supported 00:10:33.535 Delete NVM Set: Not Supported 00:10:33.535 Extended LBA Formats Supported: Supported 00:10:33.535 Flexible Data Placement Supported: Supported 00:10:33.535 00:10:33.535 Controller Memory Buffer Support 00:10:33.535 ================================ 00:10:33.535 Supported: No 00:10:33.535 00:10:33.535 Persistent Memory Region Support 00:10:33.535 ================================ 00:10:33.535 Supported: No 00:10:33.535 00:10:33.535 Admin Command Set Attributes 00:10:33.535 ============================ 00:10:33.535 Security Send/Receive: Not Supported 00:10:33.535 Format NVM: Supported 00:10:33.535 Firmware Activate/Download: Not Supported 00:10:33.535 Namespace Management: Supported 00:10:33.535 Device Self-Test: Not Supported 00:10:33.535 Directives: Supported 00:10:33.535 NVMe-MI: Not Supported 00:10:33.535 Virtualization Management: Not Supported 00:10:33.535 Doorbell Buffer Config: Supported 00:10:33.535 Get LBA Status Capability: Not Supported 00:10:33.535 Command & Feature Lockdown Capability: Not Supported 00:10:33.535 Abort Command Limit: 4 00:10:33.535 Async Event Request Limit: 4 00:10:33.535 Number of Firmware Slots: N/A 00:10:33.535 Firmware Slot 1 Read-Only: N/A 00:10:33.535 Firmware Activation Without Reset: N/A 00:10:33.535 Multiple Update Detection Support: N/A 00:10:33.535 Firmware Update Granularity: No Information Provided 00:10:33.535 Per-Namespace SMART Log: Yes 00:10:33.535 Asymmetric Namespace Access Log Page: Not Supported 00:10:33.535 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:33.535 Command Effects Log Page: Supported 00:10:33.535 Get Log Page Extended Data: Supported 00:10:33.535 Telemetry Log Pages: Not 
Supported 00:10:33.535 Persistent Event Log Pages: Not Supported 00:10:33.535 Supported Log Pages Log Page: May Support 00:10:33.535 Commands Supported & Effects Log Page: Not Supported 00:10:33.535 Feature Identifiers & Effects Log Page:May Support 00:10:33.535 NVMe-MI Commands & Effects Log Page: May Support 00:10:33.535 Data Area 4 for Telemetry Log: Not Supported 00:10:33.535 Error Log Page Entries Supported: 1 00:10:33.535 Keep Alive: Not Supported 00:10:33.535 00:10:33.535 NVM Command Set Attributes 00:10:33.535 ========================== 00:10:33.535 Submission Queue Entry Size 00:10:33.535 Max: 64 00:10:33.536 Min: 64 00:10:33.536 Completion Queue Entry Size 00:10:33.536 Max: 16 00:10:33.536 Min: 16 00:10:33.536 Number of Namespaces: 256 00:10:33.536 Compare Command: Supported 00:10:33.536 Write Uncorrectable Command: Not Supported 00:10:33.536 Dataset Management Command: Supported 00:10:33.536 Write Zeroes Command: Supported 00:10:33.536 Set Features Save Field: Supported 00:10:33.536 Reservations: Not Supported 00:10:33.536 Timestamp: Supported 00:10:33.536 Copy: Supported 00:10:33.536 Volatile Write Cache: Present 00:10:33.536 Atomic Write Unit (Normal): 1 00:10:33.536 Atomic Write Unit (PFail): 1 00:10:33.536 Atomic Compare & Write Unit: 1 00:10:33.536 Fused Compare & Write: Not Supported 00:10:33.536 Scatter-Gather List 00:10:33.536 SGL Command Set: Supported 00:10:33.536 SGL Keyed: Not Supported 00:10:33.536 SGL Bit Bucket Descriptor: Not Supported 00:10:33.536 SGL Metadata Pointer: Not Supported 00:10:33.536 Oversized SGL: Not Supported 00:10:33.536 SGL Metadata Address: Not Supported 00:10:33.536 SGL Offset: Not Supported 00:10:33.536 Transport SGL Data Block: Not Supported 00:10:33.536 Replay Protected Memory Block: Not Supported 00:10:33.536 00:10:33.536 Firmware Slot Information 00:10:33.536 ========================= 00:10:33.536 Active slot: 1 00:10:33.536 Slot 1 Firmware Revision: 1.0 00:10:33.536 00:10:33.536 00:10:33.536 Commands Supported and Effects 00:10:33.536 ============================== 00:10:33.536 Admin Commands 00:10:33.536 -------------- 00:10:33.536 Delete I/O Submission Queue (00h): Supported 00:10:33.536 Create I/O Submission Queue (01h): Supported 00:10:33.536 Get Log Page (02h): Supported 00:10:33.536 Delete I/O Completion Queue (04h): Supported 00:10:33.536 Create I/O Completion Queue (05h): Supported 00:10:33.536 Identify (06h): Supported 00:10:33.536 Abort (08h): Supported 00:10:33.536 Set Features (09h): Supported 00:10:33.536 Get Features (0Ah): Supported 00:10:33.536 Asynchronous Event Request (0Ch): Supported 00:10:33.536 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:33.536 Directive Send (19h): Supported 00:10:33.536 Directive Receive (1Ah): Supported 00:10:33.536 Virtualization Management (1Ch): Supported 00:10:33.536 Doorbell Buffer Config (7Ch): Supported 00:10:33.536 Format NVM (80h): Supported LBA-Change 00:10:33.536 I/O Commands 00:10:33.536 ------------ 00:10:33.536 Flush (00h): Supported LBA-Change 00:10:33.536 Write (01h): Supported LBA-Change 00:10:33.536 Read (02h): Supported 00:10:33.536 Compare (05h): Supported 00:10:33.536 Write Zeroes (08h): Supported LBA-Change 00:10:33.536 Dataset Management (09h): Supported LBA-Change 00:10:33.536 Unknown (0Ch): Supported 00:10:33.536 Unknown (12h): Supported 00:10:33.536 Copy (19h): Supported LBA-Change 00:10:33.536 Unknown (1Dh): Supported LBA-Change 00:10:33.536 00:10:33.536 Error Log 00:10:33.536 ========= 00:10:33.536 00:10:33.536 Arbitration 00:10:33.536 =========== 
00:10:33.536 Arbitration Burst: no limit 00:10:33.536 00:10:33.536 Power Management 00:10:33.536 ================ 00:10:33.536 Number of Power States: 1 00:10:33.536 Current Power State: Power State #0 00:10:33.536 Power State #0: 00:10:33.536 Max Power: 25.00 W 00:10:33.536 Non-Operational State: Operational 00:10:33.536 Entry Latency: 16 microseconds 00:10:33.536 Exit Latency: 4 microseconds 00:10:33.536 Relative Read Throughput: 0 00:10:33.536 Relative Read Latency: 0 00:10:33.536 Relative Write Throughput: 0 00:10:33.536 Relative Write Latency: 0 00:10:33.536 Idle Power: Not Reported 00:10:33.536 Active Power: Not Reported 00:10:33.536 Non-Operational Permissive Mode: Not Supported 00:10:33.536 00:10:33.536 Health Information 00:10:33.536 ================== 00:10:33.536 Critical Warnings: 00:10:33.536 Available Spare Space: OK 00:10:33.536 Temperature: OK 00:10:33.536 Device Reliability: OK 00:10:33.536 Read Only: No 00:10:33.536 Volatile Memory Backup: OK 00:10:33.536 Current Temperature: 323 Kelvin (50 Celsius) 00:10:33.536 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:33.536 Available Spare: 0% 00:10:33.536 Available Spare Threshold: 0% 00:10:33.536 Life Percentage Used: 0% 00:10:33.536 Data Units Read: 1275 00:10:33.536 Data Units Written: 606 00:10:33.536 Host Read Commands: 60934 00:10:33.536 Host Write Commands: 30384 00:10:33.536 Controller Busy Time: 0 minutes 00:10:33.536 Power Cycles: 0 00:10:33.536 Power On Hours: 0 hours 00:10:33.536 Unsafe Shutdowns: 0 00:10:33.536 Unrecoverable Media Errors: 0 00:10:33.536 Lifetime Error Log Entries: 0 00:10:33.536 Warning Temperature Time: 0 minutes 00:10:33.536 Critical Temperature Time: 0 minutes 00:10:33.536 00:10:33.536 Number of Queues 00:10:33.536 ================ 00:10:33.536 Number of I/O Submission Queues: 64 00:10:33.536 Number of I/O Completion Queues: 64 00:10:33.536 00:10:33.536 ZNS Specific Controller Data 00:10:33.536 ============================ 00:10:33.536 Zone Append Size Limit: 0 00:10:33.536 00:10:33.536 00:10:33.536 Active Namespaces 00:10:33.536 ================= 00:10:33.536 Namespace ID:1 00:10:33.536 Error Recovery Timeout: Unlimited 00:10:33.536 Command Set Identifier: NVM (00h) 00:10:33.536 Deallocate: Supported 00:10:33.536 Deallocated/Unwritten Error: Supported 00:10:33.536 Deallocated Read Value: All 0x00 00:10:33.536 Deallocate in Write Zeroes: Not Supported 00:10:33.536 Deallocated Guard Field: 0xFFFF 00:10:33.536 Flush: Supported 00:10:33.536 Reservation: Not Supported 00:10:33.536 Namespace Sharing Capabilities: Multiple Controllers 00:10:33.536 Size (in LBAs): 262144 (1GiB) 00:10:33.536 Capacity (in LBAs): 262144 (1GiB) 00:10:33.536 Utilization (in LBAs): 262144 (1GiB) 00:10:33.536 Thin Provisioning: Not Supported 00:10:33.536 Per-NS Atomic Units: No 00:10:33.536 Maximum Single Source Range Length: 128 00:10:33.536 Maximum Copy Length: 128 00:10:33.536 Maximum Source Range Count: 128 00:10:33.536 NGUID/EUI64 Never Reused: No 00:10:33.536 Namespace Write Protected: No 00:10:33.536 Endurance group ID: 1 00:10:33.536 Number of LBA Formats: 8 00:10:33.536 Current LBA Format: LBA Format #04 00:10:33.536 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:33.536 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:33.536 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:33.536 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:33.536 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:33.536 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:33.536 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:10:33.536 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:33.536 00:10:33.536 Get Feature FDP: 00:10:33.536 ================ 00:10:33.536 Enabled: Yes 00:10:33.536 FDP configuration index: 0 00:10:33.536 00:10:33.537 FDP configurations log page 00:10:33.537 =========================== 00:10:33.537 Number of FDP configurations: 1 00:10:33.537 Version: 0 00:10:33.537 Size: 112 00:10:33.537 FDP Configuration Descriptor: 0 00:10:33.537 Descriptor Size: 96 00:10:33.537 Reclaim Group Identifier format: 2 00:10:33.537 FDP Volatile Write Cache: Not Present 00:10:33.537 FDP Configuration: Valid 00:10:33.537 Vendor Specific Size: 0 00:10:33.537 Number of Reclaim Groups: 2 00:10:33.537 Number of Reclaim Unit Handles: 8 00:10:33.537 Max Placement Identifiers: 128 00:10:33.537 Number of Namespaces Supported: 256 00:10:33.537 Reclaim unit Nominal Size: 6000000 bytes 00:10:33.537 Estimated Reclaim Unit Time Limit: Not Reported 00:10:33.537 RUH Desc #000: RUH Type: Initially Isolated 00:10:33.537 RUH Desc #001: RUH Type: Initially Isolated 00:10:33.537 RUH Desc #002: RUH Type: Initially Isolated 00:10:33.537 RUH Desc #003: RUH Type: Initially Isolated 00:10:33.537 RUH Desc #004: RUH Type: Initially Isolated 00:10:33.537 RUH Desc #005: RUH Type: Initially Isolated 00:10:33.537 RUH Desc #006: RUH Type: Initially Isolated 00:10:33.537 RUH Desc #007: RUH Type: Initially Isolated 00:10:33.537 00:10:33.537 FDP reclaim unit handle usage log page 00:10:33.537 ====================================== 00:10:33.537 Number of Reclaim Unit Handles: 8 00:10:33.537 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:33.537 RUH Usage Desc #001: RUH Attributes: Unused 00:10:33.537 RUH Usage Desc #002: RUH Attributes: Unused 00:10:33.537 RUH Usage Desc #003: RUH Attributes: Unused 00:10:33.537 RUH Usage Desc #004: RUH Attributes: Unused 00:10:33.537 RUH Usage Desc #005: RUH Attributes: Unused 00:10:33.537 RUH Usage Desc #006: RUH Attributes: Unused 00:10:33.537 RUH Usage Desc #007: RUH Attributes: Unused 00:10:33.537 00:10:33.537 FDP statistics log page 00:10:33.537 ======================= 00:10:33.537 Host bytes with metadata written: 390017024 00:10:33.537 Media bytes with metadata written: 390090752 00:10:33.537 Media bytes erased: 0 00:10:33.537 00:10:33.537 FDP events log page 00:10:33.537 =================== 00:10:33.537 Number of FDP events: 0 00:10:33.537 00:10:33.537 00:10:33.537 real 0m1.546s 00:10:33.537 user 0m0.610s 00:10:33.537 sys 0m0.735s 00:10:33.537 19:11:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:33.537 19:11:10 -- common/autotest_common.sh@10 -- # set +x 00:10:33.537 ************************************ 00:10:33.537 END TEST nvme_identify 00:10:33.537 ************************************ 00:10:33.537 19:11:10 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:33.537 19:11:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:33.537 19:11:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:33.537 19:11:10 -- common/autotest_common.sh@10 -- # set +x 00:10:33.537 ************************************ 00:10:33.537 START TEST nvme_perf 00:10:33.537 ************************************ 00:10:33.537 19:11:10 -- common/autotest_common.sh@1102 -- # nvme_perf 00:10:33.537 19:11:10 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:34.929 Initializing NVMe Controllers 00:10:34.929 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:34.929 
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:34.929 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:34.929 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:34.929 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:34.929 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:34.929 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:34.929 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:34.929 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:34.929 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:34.929 Initialization complete. Launching workers. 00:10:34.929 ======================================================== 00:10:34.929 Latency(us) 00:10:34.929 Device Information : IOPS MiB/s Average min max 00:10:34.929 PCIE (0000:00:06.0) NSID 1 from core 0: 14881.09 174.39 8596.06 6533.05 35980.45 00:10:34.929 PCIE (0000:00:07.0) NSID 1 from core 0: 14881.09 174.39 8586.40 6726.39 34554.42 00:10:34.929 PCIE (0000:00:09.0) NSID 1 from core 0: 14881.09 174.39 8575.79 6840.73 33653.21 00:10:34.929 PCIE (0000:00:08.0) NSID 1 from core 0: 14881.09 174.39 8563.92 6795.88 32128.46 00:10:34.929 PCIE (0000:00:08.0) NSID 2 from core 0: 14881.09 174.39 8552.45 6784.82 30454.31 00:10:34.929 PCIE (0000:00:08.0) NSID 3 from core 0: 14881.09 174.39 8540.67 6714.46 28832.30 00:10:34.929 ======================================================== 00:10:34.929 Total : 89286.53 1046.33 8569.21 6533.05 35980.45 00:10:34.929 00:10:34.929 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:34.929 ================================================================================= 00:10:34.929 1.00000% : 7000.436us 00:10:34.930 10.00000% : 7387.695us 00:10:34.930 25.00000% : 7804.742us 00:10:34.930 50.00000% : 8400.524us 00:10:34.930 75.00000% : 8996.305us 00:10:34.930 90.00000% : 9413.353us 00:10:34.930 95.00000% : 9711.244us 00:10:34.930 98.00000% : 10426.182us 00:10:34.930 99.00000% : 12570.996us 00:10:34.930 99.50000% : 33363.782us 00:10:34.930 99.90000% : 35508.596us 00:10:34.930 99.99000% : 35985.222us 00:10:34.930 99.99900% : 35985.222us 00:10:34.930 99.99990% : 35985.222us 00:10:34.930 99.99999% : 35985.222us 00:10:34.930 00:10:34.930 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:34.930 ================================================================================= 00:10:34.930 1.00000% : 7179.171us 00:10:34.930 10.00000% : 7506.851us 00:10:34.930 25.00000% : 7864.320us 00:10:34.930 50.00000% : 8400.524us 00:10:34.930 75.00000% : 8877.149us 00:10:34.930 90.00000% : 9294.196us 00:10:34.930 95.00000% : 9592.087us 00:10:34.930 98.00000% : 10366.604us 00:10:34.930 99.00000% : 12034.793us 00:10:34.930 99.50000% : 32172.218us 00:10:34.930 99.90000% : 34317.033us 00:10:34.930 99.99000% : 34555.345us 00:10:34.930 99.99900% : 34555.345us 00:10:34.930 99.99990% : 34555.345us 00:10:34.930 99.99999% : 34555.345us 00:10:34.930 00:10:34.930 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:34.930 ================================================================================= 00:10:34.930 1.00000% : 7149.382us 00:10:34.930 10.00000% : 7506.851us 00:10:34.930 25.00000% : 7864.320us 00:10:34.930 50.00000% : 8400.524us 00:10:34.930 75.00000% : 8877.149us 00:10:34.930 90.00000% : 9294.196us 00:10:34.930 95.00000% : 9592.087us 00:10:34.930 98.00000% : 10307.025us 00:10:34.930 99.00000% : 12213.527us 00:10:34.930 99.50000% : 31218.967us 00:10:34.930 99.90000% : 33363.782us 
00:10:34.930 99.99000% : 33840.407us 00:10:34.930 99.99900% : 33840.407us 00:10:34.930 99.99990% : 33840.407us 00:10:34.930 99.99999% : 33840.407us 00:10:34.930 00:10:34.930 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:34.930 ================================================================================= 00:10:34.930 1.00000% : 7179.171us 00:10:34.930 10.00000% : 7506.851us 00:10:34.930 25.00000% : 7864.320us 00:10:34.930 50.00000% : 8400.524us 00:10:34.930 75.00000% : 8877.149us 00:10:34.930 90.00000% : 9294.196us 00:10:34.930 95.00000% : 9592.087us 00:10:34.930 98.00000% : 10366.604us 00:10:34.930 99.00000% : 11736.902us 00:10:34.930 99.50000% : 29669.935us 00:10:34.930 99.90000% : 31695.593us 00:10:34.930 99.99000% : 32172.218us 00:10:34.930 99.99900% : 32172.218us 00:10:34.930 99.99990% : 32172.218us 00:10:34.930 99.99999% : 32172.218us 00:10:34.930 00:10:34.930 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:34.930 ================================================================================= 00:10:34.930 1.00000% : 7179.171us 00:10:34.930 10.00000% : 7536.640us 00:10:34.930 25.00000% : 7864.320us 00:10:34.930 50.00000% : 8400.524us 00:10:34.930 75.00000% : 8877.149us 00:10:34.930 90.00000% : 9294.196us 00:10:34.930 95.00000% : 9592.087us 00:10:34.930 98.00000% : 10307.025us 00:10:34.930 99.00000% : 12034.793us 00:10:34.930 99.50000% : 28240.058us 00:10:34.930 99.90000% : 30146.560us 00:10:34.930 99.99000% : 30504.029us 00:10:34.930 99.99900% : 30504.029us 00:10:34.930 99.99990% : 30504.029us 00:10:34.930 99.99999% : 30504.029us 00:10:34.930 00:10:34.930 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:34.930 ================================================================================= 00:10:34.930 1.00000% : 7179.171us 00:10:34.930 10.00000% : 7536.640us 00:10:34.930 25.00000% : 7864.320us 00:10:34.930 50.00000% : 8400.524us 00:10:34.930 75.00000% : 8877.149us 00:10:34.930 90.00000% : 9234.618us 00:10:34.930 95.00000% : 9592.087us 00:10:34.930 98.00000% : 10366.604us 00:10:34.930 99.00000% : 12630.575us 00:10:34.930 99.50000% : 26571.869us 00:10:34.930 99.90000% : 28478.371us 00:10:34.930 99.99000% : 28835.840us 00:10:34.930 99.99900% : 28835.840us 00:10:34.930 99.99990% : 28835.840us 00:10:34.930 99.99999% : 28835.840us 00:10:34.930 00:10:34.930 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:34.930 ============================================================================== 00:10:34.930 Range in us Cumulative IO count 00:10:34.930 6523.811 - 6553.600: 0.0067% ( 1) 00:10:34.930 6553.600 - 6583.389: 0.0134% ( 1) 00:10:34.930 6583.389 - 6613.178: 0.0200% ( 1) 00:10:34.930 6613.178 - 6642.967: 0.0334% ( 2) 00:10:34.930 6642.967 - 6672.756: 0.0401% ( 1) 00:10:34.930 6672.756 - 6702.545: 0.0601% ( 3) 00:10:34.930 6702.545 - 6732.335: 0.0868% ( 4) 00:10:34.930 6732.335 - 6762.124: 0.1202% ( 5) 00:10:34.930 6762.124 - 6791.913: 0.1669% ( 7) 00:10:34.930 6791.913 - 6821.702: 0.2270% ( 9) 00:10:34.930 6821.702 - 6851.491: 0.3005% ( 11) 00:10:34.930 6851.491 - 6881.280: 0.4140% ( 17) 00:10:34.930 6881.280 - 6911.069: 0.5342% ( 18) 00:10:34.930 6911.069 - 6940.858: 0.7345% ( 30) 00:10:34.930 6940.858 - 6970.647: 0.9615% ( 34) 00:10:34.930 6970.647 - 7000.436: 1.2286% ( 40) 00:10:34.930 7000.436 - 7030.225: 1.5759% ( 52) 00:10:34.930 7030.225 - 7060.015: 1.9965% ( 63) 00:10:34.930 7060.015 - 7089.804: 2.5240% ( 79) 00:10:34.930 7089.804 - 7119.593: 3.1584% ( 95) 00:10:34.930 7119.593 - 
7149.382: 3.7794% ( 93) 00:10:34.930 7149.382 - 7179.171: 4.4805% ( 105) 00:10:34.930 7179.171 - 7208.960: 5.1883% ( 106) 00:10:34.930 7208.960 - 7238.749: 6.0764% ( 133) 00:10:34.930 7238.749 - 7268.538: 6.9378% ( 129) 00:10:34.930 7268.538 - 7298.327: 7.9594% ( 153) 00:10:34.930 7298.327 - 7328.116: 8.8942% ( 140) 00:10:34.930 7328.116 - 7357.905: 9.9159% ( 153) 00:10:34.930 7357.905 - 7387.695: 11.0110% ( 164) 00:10:34.930 7387.695 - 7417.484: 12.0660% ( 158) 00:10:34.930 7417.484 - 7447.273: 13.2078% ( 171) 00:10:34.930 7447.273 - 7477.062: 14.3029% ( 164) 00:10:34.930 7477.062 - 7506.851: 15.4514% ( 172) 00:10:34.930 7506.851 - 7536.640: 16.5799% ( 169) 00:10:34.930 7536.640 - 7566.429: 17.7417% ( 174) 00:10:34.930 7566.429 - 7596.218: 18.8568% ( 167) 00:10:34.930 7596.218 - 7626.007: 20.0988% ( 186) 00:10:34.930 7626.007 - 7685.585: 22.3958% ( 344) 00:10:34.930 7685.585 - 7745.164: 24.7196% ( 348) 00:10:34.930 7745.164 - 7804.742: 26.9965% ( 341) 00:10:34.930 7804.742 - 7864.320: 29.4404% ( 366) 00:10:34.930 7864.320 - 7923.898: 31.8042% ( 354) 00:10:34.930 7923.898 - 7983.476: 34.2815% ( 371) 00:10:34.930 7983.476 - 8043.055: 36.7054% ( 363) 00:10:34.930 8043.055 - 8102.633: 39.1426% ( 365) 00:10:34.930 8102.633 - 8162.211: 41.6266% ( 372) 00:10:34.930 8162.211 - 8221.789: 44.0638% ( 365) 00:10:34.930 8221.789 - 8281.367: 46.6546% ( 388) 00:10:34.930 8281.367 - 8340.945: 49.1587% ( 375) 00:10:34.930 8340.945 - 8400.524: 51.5425% ( 357) 00:10:34.930 8400.524 - 8460.102: 54.0198% ( 371) 00:10:34.930 8460.102 - 8519.680: 56.5572% ( 380) 00:10:34.930 8519.680 - 8579.258: 59.0745% ( 377) 00:10:34.930 8579.258 - 8638.836: 61.5451% ( 370) 00:10:34.930 8638.836 - 8698.415: 64.0825% ( 380) 00:10:34.930 8698.415 - 8757.993: 66.6266% ( 381) 00:10:34.930 8757.993 - 8817.571: 69.1907% ( 384) 00:10:34.930 8817.571 - 8877.149: 71.5812% ( 358) 00:10:34.930 8877.149 - 8936.727: 74.0585% ( 371) 00:10:34.930 8936.727 - 8996.305: 76.5558% ( 374) 00:10:34.930 8996.305 - 9055.884: 79.0064% ( 367) 00:10:34.930 9055.884 - 9115.462: 81.3301% ( 348) 00:10:34.930 9115.462 - 9175.040: 83.6338% ( 345) 00:10:34.930 9175.040 - 9234.618: 85.7171% ( 312) 00:10:34.930 9234.618 - 9294.196: 87.6936% ( 296) 00:10:34.930 9294.196 - 9353.775: 89.4899% ( 269) 00:10:34.930 9353.775 - 9413.353: 90.9388% ( 217) 00:10:34.930 9413.353 - 9472.931: 92.2409% ( 195) 00:10:34.930 9472.931 - 9532.509: 93.2759% ( 155) 00:10:34.930 9532.509 - 9592.087: 94.1239% ( 127) 00:10:34.930 9592.087 - 9651.665: 94.7917% ( 100) 00:10:34.930 9651.665 - 9711.244: 95.3325% ( 81) 00:10:34.930 9711.244 - 9770.822: 95.8534% ( 78) 00:10:34.930 9770.822 - 9830.400: 96.2073% ( 53) 00:10:34.930 9830.400 - 9889.978: 96.4810% ( 41) 00:10:34.930 9889.978 - 9949.556: 96.7748% ( 44) 00:10:34.930 9949.556 - 10009.135: 96.9551% ( 27) 00:10:34.930 10009.135 - 10068.713: 97.1421% ( 28) 00:10:34.930 10068.713 - 10128.291: 97.3157% ( 26) 00:10:34.930 10128.291 - 10187.869: 97.4960% ( 27) 00:10:34.930 10187.869 - 10247.447: 97.6429% ( 22) 00:10:34.930 10247.447 - 10307.025: 97.8098% ( 25) 00:10:34.930 10307.025 - 10366.604: 97.9434% ( 20) 00:10:34.930 10366.604 - 10426.182: 98.0168% ( 11) 00:10:34.930 10426.182 - 10485.760: 98.0836% ( 10) 00:10:34.930 10485.760 - 10545.338: 98.1504% ( 10) 00:10:34.930 10545.338 - 10604.916: 98.2171% ( 10) 00:10:34.930 10604.916 - 10664.495: 98.2639% ( 7) 00:10:34.930 10664.495 - 10724.073: 98.3040% ( 6) 00:10:34.930 10724.073 - 10783.651: 98.3440% ( 6) 00:10:34.930 10783.651 - 10843.229: 98.3707% ( 4) 00:10:34.930 10843.229 - 
10902.807: 98.4175% ( 7) 00:10:34.930 10902.807 - 10962.385: 98.4575% ( 6) 00:10:34.930 10962.385 - 11021.964: 98.4709% ( 2) 00:10:34.930 11021.964 - 11081.542: 98.4976% ( 4) 00:10:34.930 11081.542 - 11141.120: 98.5110% ( 2) 00:10:34.930 11141.120 - 11200.698: 98.5377% ( 4) 00:10:34.930 11200.698 - 11260.276: 98.5644% ( 4) 00:10:34.930 11260.276 - 11319.855: 98.5844% ( 3) 00:10:34.931 11319.855 - 11379.433: 98.6044% ( 3) 00:10:34.931 11379.433 - 11439.011: 98.6245% ( 3) 00:10:34.931 11439.011 - 11498.589: 98.6445% ( 3) 00:10:34.931 11498.589 - 11558.167: 98.6645% ( 3) 00:10:34.931 11558.167 - 11617.745: 98.6846% ( 3) 00:10:34.931 11617.745 - 11677.324: 98.7046% ( 3) 00:10:34.931 11677.324 - 11736.902: 98.7246% ( 3) 00:10:34.931 11736.902 - 11796.480: 98.7447% ( 3) 00:10:34.931 11796.480 - 11856.058: 98.7580% ( 2) 00:10:34.931 11856.058 - 11915.636: 98.7847% ( 4) 00:10:34.931 11915.636 - 11975.215: 98.8048% ( 3) 00:10:34.931 11975.215 - 12034.793: 98.8248% ( 3) 00:10:34.931 12034.793 - 12094.371: 98.8515% ( 4) 00:10:34.931 12094.371 - 12153.949: 98.8649% ( 2) 00:10:34.931 12153.949 - 12213.527: 98.8916% ( 4) 00:10:34.931 12213.527 - 12273.105: 98.9183% ( 4) 00:10:34.931 12273.105 - 12332.684: 98.9316% ( 2) 00:10:34.931 12332.684 - 12392.262: 98.9517% ( 3) 00:10:34.931 12392.262 - 12451.840: 98.9717% ( 3) 00:10:34.931 12451.840 - 12511.418: 98.9984% ( 4) 00:10:34.931 12511.418 - 12570.996: 99.0184% ( 3) 00:10:34.931 12570.996 - 12630.575: 99.0385% ( 3) 00:10:34.931 12630.575 - 12690.153: 99.0652% ( 4) 00:10:34.931 12690.153 - 12749.731: 99.0852% ( 3) 00:10:34.931 12749.731 - 12809.309: 99.1052% ( 3) 00:10:34.931 12809.309 - 12868.887: 99.1319% ( 4) 00:10:34.931 12868.887 - 12928.465: 99.1453% ( 2) 00:10:34.931 31218.967 - 31457.280: 99.1787% ( 5) 00:10:34.931 31457.280 - 31695.593: 99.2254% ( 7) 00:10:34.931 31695.593 - 31933.905: 99.2655% ( 6) 00:10:34.931 31933.905 - 32172.218: 99.2989% ( 5) 00:10:34.931 32172.218 - 32410.531: 99.3456% ( 7) 00:10:34.931 32410.531 - 32648.844: 99.3857% ( 6) 00:10:34.931 32648.844 - 32887.156: 99.4324% ( 7) 00:10:34.931 32887.156 - 33125.469: 99.4725% ( 6) 00:10:34.931 33125.469 - 33363.782: 99.5192% ( 7) 00:10:34.931 33363.782 - 33602.095: 99.5393% ( 3) 00:10:34.931 33602.095 - 33840.407: 99.5793% ( 6) 00:10:34.931 33840.407 - 34078.720: 99.6327% ( 8) 00:10:34.931 34078.720 - 34317.033: 99.6795% ( 7) 00:10:34.931 34317.033 - 34555.345: 99.7262% ( 7) 00:10:34.931 34555.345 - 34793.658: 99.7663% ( 6) 00:10:34.931 34793.658 - 35031.971: 99.8130% ( 7) 00:10:34.931 35031.971 - 35270.284: 99.8531% ( 6) 00:10:34.931 35270.284 - 35508.596: 99.9065% ( 8) 00:10:34.931 35508.596 - 35746.909: 99.9599% ( 8) 00:10:34.931 35746.909 - 35985.222: 100.0000% ( 6) 00:10:34.931 00:10:34.931 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:34.931 ============================================================================== 00:10:34.931 Range in us Cumulative IO count 00:10:34.931 6702.545 - 6732.335: 0.0067% ( 1) 00:10:34.931 6732.335 - 6762.124: 0.0134% ( 1) 00:10:34.931 6762.124 - 6791.913: 0.0334% ( 3) 00:10:34.931 6791.913 - 6821.702: 0.0467% ( 2) 00:10:34.931 6821.702 - 6851.491: 0.0668% ( 3) 00:10:34.931 6851.491 - 6881.280: 0.0801% ( 2) 00:10:34.931 6881.280 - 6911.069: 0.1002% ( 3) 00:10:34.931 6911.069 - 6940.858: 0.1335% ( 5) 00:10:34.931 6940.858 - 6970.647: 0.1803% ( 7) 00:10:34.931 6970.647 - 7000.436: 0.2270% ( 7) 00:10:34.931 7000.436 - 7030.225: 0.2871% ( 9) 00:10:34.931 7030.225 - 7060.015: 0.3873% ( 15) 00:10:34.931 7060.015 - 7089.804: 
0.5142% ( 19) 00:10:34.931 7089.804 - 7119.593: 0.7011% ( 28) 00:10:34.931 7119.593 - 7149.382: 0.9615% ( 39) 00:10:34.931 7149.382 - 7179.171: 1.2754% ( 47) 00:10:34.931 7179.171 - 7208.960: 1.6493% ( 56) 00:10:34.931 7208.960 - 7238.749: 2.0566% ( 61) 00:10:34.931 7238.749 - 7268.538: 2.5708% ( 77) 00:10:34.931 7268.538 - 7298.327: 3.1784% ( 91) 00:10:34.931 7298.327 - 7328.116: 3.9196% ( 111) 00:10:34.931 7328.116 - 7357.905: 4.7409% ( 123) 00:10:34.931 7357.905 - 7387.695: 5.6357% ( 134) 00:10:34.931 7387.695 - 7417.484: 6.5839% ( 142) 00:10:34.931 7417.484 - 7447.273: 7.6522% ( 160) 00:10:34.931 7447.273 - 7477.062: 8.8608% ( 181) 00:10:34.931 7477.062 - 7506.851: 10.0628% ( 180) 00:10:34.931 7506.851 - 7536.640: 11.2914% ( 184) 00:10:34.931 7536.640 - 7566.429: 12.5735% ( 192) 00:10:34.931 7566.429 - 7596.218: 13.8555% ( 192) 00:10:34.931 7596.218 - 7626.007: 15.1108% ( 188) 00:10:34.931 7626.007 - 7685.585: 17.7951% ( 402) 00:10:34.931 7685.585 - 7745.164: 20.5262% ( 409) 00:10:34.931 7745.164 - 7804.742: 23.2706% ( 411) 00:10:34.931 7804.742 - 7864.320: 26.0417% ( 415) 00:10:34.931 7864.320 - 7923.898: 28.8862% ( 426) 00:10:34.931 7923.898 - 7983.476: 31.6707% ( 417) 00:10:34.931 7983.476 - 8043.055: 34.5887% ( 437) 00:10:34.931 8043.055 - 8102.633: 37.4866% ( 434) 00:10:34.931 8102.633 - 8162.211: 40.3846% ( 434) 00:10:34.931 8162.211 - 8221.789: 43.2425% ( 428) 00:10:34.931 8221.789 - 8281.367: 46.1672% ( 438) 00:10:34.931 8281.367 - 8340.945: 49.1720% ( 450) 00:10:34.931 8340.945 - 8400.524: 52.1568% ( 447) 00:10:34.931 8400.524 - 8460.102: 55.1616% ( 450) 00:10:34.931 8460.102 - 8519.680: 58.2065% ( 456) 00:10:34.931 8519.680 - 8579.258: 61.1712% ( 444) 00:10:34.931 8579.258 - 8638.836: 64.1426% ( 445) 00:10:34.931 8638.836 - 8698.415: 67.0940% ( 442) 00:10:34.931 8698.415 - 8757.993: 69.9519% ( 428) 00:10:34.931 8757.993 - 8817.571: 72.9100% ( 443) 00:10:34.931 8817.571 - 8877.149: 75.7278% ( 422) 00:10:34.931 8877.149 - 8936.727: 78.5590% ( 424) 00:10:34.931 8936.727 - 8996.305: 81.2300% ( 400) 00:10:34.931 8996.305 - 9055.884: 83.7941% ( 384) 00:10:34.931 9055.884 - 9115.462: 86.0911% ( 344) 00:10:34.931 9115.462 - 9175.040: 88.1544% ( 309) 00:10:34.931 9175.040 - 9234.618: 89.8972% ( 261) 00:10:34.931 9234.618 - 9294.196: 91.2861% ( 208) 00:10:34.931 9294.196 - 9353.775: 92.4546% ( 175) 00:10:34.931 9353.775 - 9413.353: 93.3894% ( 140) 00:10:34.931 9413.353 - 9472.931: 94.1907% ( 120) 00:10:34.931 9472.931 - 9532.509: 94.7917% ( 90) 00:10:34.931 9532.509 - 9592.087: 95.2791% ( 73) 00:10:34.931 9592.087 - 9651.665: 95.7198% ( 66) 00:10:34.931 9651.665 - 9711.244: 96.0470% ( 49) 00:10:34.931 9711.244 - 9770.822: 96.2941% ( 37) 00:10:34.931 9770.822 - 9830.400: 96.5478% ( 38) 00:10:34.931 9830.400 - 9889.978: 96.7882% ( 36) 00:10:34.931 9889.978 - 9949.556: 96.9752% ( 28) 00:10:34.931 9949.556 - 10009.135: 97.1688% ( 29) 00:10:34.931 10009.135 - 10068.713: 97.3558% ( 28) 00:10:34.931 10068.713 - 10128.291: 97.5361% ( 27) 00:10:34.931 10128.291 - 10187.869: 97.6696% ( 20) 00:10:34.931 10187.869 - 10247.447: 97.8098% ( 21) 00:10:34.931 10247.447 - 10307.025: 97.9100% ( 15) 00:10:34.931 10307.025 - 10366.604: 98.0101% ( 15) 00:10:34.931 10366.604 - 10426.182: 98.0836% ( 11) 00:10:34.931 10426.182 - 10485.760: 98.1504% ( 10) 00:10:34.931 10485.760 - 10545.338: 98.1971% ( 7) 00:10:34.931 10545.338 - 10604.916: 98.2572% ( 9) 00:10:34.931 10604.916 - 10664.495: 98.3106% ( 8) 00:10:34.931 10664.495 - 10724.073: 98.3640% ( 8) 00:10:34.931 10724.073 - 10783.651: 98.4175% ( 8) 
00:10:34.931 10783.651 - 10843.229: 98.4575% ( 6) 00:10:34.931 10843.229 - 10902.807: 98.4909% ( 5) 00:10:34.931 10902.807 - 10962.385: 98.5243% ( 5) 00:10:34.931 10962.385 - 11021.964: 98.5577% ( 5) 00:10:34.931 11021.964 - 11081.542: 98.5911% ( 5) 00:10:34.931 11081.542 - 11141.120: 98.6178% ( 4) 00:10:34.931 11141.120 - 11200.698: 98.6512% ( 5) 00:10:34.931 11200.698 - 11260.276: 98.6846% ( 5) 00:10:34.931 11260.276 - 11319.855: 98.7113% ( 4) 00:10:34.931 11319.855 - 11379.433: 98.7447% ( 5) 00:10:34.931 11379.433 - 11439.011: 98.7780% ( 5) 00:10:34.931 11439.011 - 11498.589: 98.8114% ( 5) 00:10:34.931 11498.589 - 11558.167: 98.8381% ( 4) 00:10:34.931 11558.167 - 11617.745: 98.8782% ( 6) 00:10:34.931 11617.745 - 11677.324: 98.9049% ( 4) 00:10:34.931 11677.324 - 11736.902: 98.9316% ( 4) 00:10:34.931 11736.902 - 11796.480: 98.9517% ( 3) 00:10:34.931 11796.480 - 11856.058: 98.9650% ( 2) 00:10:34.931 11856.058 - 11915.636: 98.9850% ( 3) 00:10:34.931 11915.636 - 11975.215: 98.9984% ( 2) 00:10:34.931 11975.215 - 12034.793: 99.0184% ( 3) 00:10:34.931 12034.793 - 12094.371: 99.0318% ( 2) 00:10:34.931 12094.371 - 12153.949: 99.0518% ( 3) 00:10:34.931 12153.949 - 12213.527: 99.0652% ( 2) 00:10:34.931 12213.527 - 12273.105: 99.0785% ( 2) 00:10:34.931 12273.105 - 12332.684: 99.0986% ( 3) 00:10:34.931 12332.684 - 12392.262: 99.1119% ( 2) 00:10:34.931 12392.262 - 12451.840: 99.1319% ( 3) 00:10:34.931 12451.840 - 12511.418: 99.1386% ( 1) 00:10:34.931 12511.418 - 12570.996: 99.1453% ( 1) 00:10:34.931 30146.560 - 30265.716: 99.1653% ( 3) 00:10:34.931 30265.716 - 30384.873: 99.1920% ( 4) 00:10:34.931 30384.873 - 30504.029: 99.2188% ( 4) 00:10:34.931 30504.029 - 30742.342: 99.2588% ( 6) 00:10:34.931 30742.342 - 30980.655: 99.3056% ( 7) 00:10:34.931 30980.655 - 31218.967: 99.3590% ( 8) 00:10:34.931 31218.967 - 31457.280: 99.3990% ( 6) 00:10:34.931 31457.280 - 31695.593: 99.4458% ( 7) 00:10:34.931 31695.593 - 31933.905: 99.4925% ( 7) 00:10:34.931 31933.905 - 32172.218: 99.5393% ( 7) 00:10:34.931 32172.218 - 32410.531: 99.5793% ( 6) 00:10:34.931 32410.531 - 32648.844: 99.6261% ( 7) 00:10:34.931 32648.844 - 32887.156: 99.6661% ( 6) 00:10:34.931 32887.156 - 33125.469: 99.7129% ( 7) 00:10:34.931 33125.469 - 33363.782: 99.7596% ( 7) 00:10:34.931 33363.782 - 33602.095: 99.8064% ( 7) 00:10:34.931 33602.095 - 33840.407: 99.8531% ( 7) 00:10:34.931 33840.407 - 34078.720: 99.8998% ( 7) 00:10:34.932 34078.720 - 34317.033: 99.9533% ( 8) 00:10:34.932 34317.033 - 34555.345: 100.0000% ( 7) 00:10:34.932 00:10:34.932 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:34.932 ============================================================================== 00:10:34.932 Range in us Cumulative IO count 00:10:34.932 6821.702 - 6851.491: 0.0067% ( 1) 00:10:34.932 6851.491 - 6881.280: 0.0334% ( 4) 00:10:34.932 6881.280 - 6911.069: 0.0668% ( 5) 00:10:34.932 6911.069 - 6940.858: 0.1135% ( 7) 00:10:34.932 6940.858 - 6970.647: 0.1803% ( 10) 00:10:34.932 6970.647 - 7000.436: 0.2537% ( 11) 00:10:34.932 7000.436 - 7030.225: 0.3472% ( 14) 00:10:34.932 7030.225 - 7060.015: 0.4474% ( 15) 00:10:34.932 7060.015 - 7089.804: 0.6076% ( 24) 00:10:34.932 7089.804 - 7119.593: 0.7812% ( 26) 00:10:34.932 7119.593 - 7149.382: 1.0083% ( 34) 00:10:34.932 7149.382 - 7179.171: 1.3355% ( 49) 00:10:34.932 7179.171 - 7208.960: 1.7428% ( 61) 00:10:34.932 7208.960 - 7238.749: 2.2636% ( 78) 00:10:34.932 7238.749 - 7268.538: 2.8512% ( 88) 00:10:34.932 7268.538 - 7298.327: 3.5524% ( 105) 00:10:34.932 7298.327 - 7328.116: 4.2869% ( 110) 00:10:34.932 
7328.116 - 7357.905: 5.1082% ( 123) 00:10:34.932 7357.905 - 7387.695: 6.0096% ( 135) 00:10:34.932 7387.695 - 7417.484: 6.9444% ( 140) 00:10:34.932 7417.484 - 7447.273: 8.0863% ( 171) 00:10:34.932 7447.273 - 7477.062: 9.2548% ( 175) 00:10:34.932 7477.062 - 7506.851: 10.4501% ( 179) 00:10:34.932 7506.851 - 7536.640: 11.5986% ( 172) 00:10:34.932 7536.640 - 7566.429: 12.8005% ( 180) 00:10:34.932 7566.429 - 7596.218: 14.0491% ( 187) 00:10:34.932 7596.218 - 7626.007: 15.3579% ( 196) 00:10:34.932 7626.007 - 7685.585: 18.0155% ( 398) 00:10:34.932 7685.585 - 7745.164: 20.7799% ( 414) 00:10:34.932 7745.164 - 7804.742: 23.5978% ( 422) 00:10:34.932 7804.742 - 7864.320: 26.4223% ( 423) 00:10:34.932 7864.320 - 7923.898: 29.1600% ( 410) 00:10:34.932 7923.898 - 7983.476: 31.9979% ( 425) 00:10:34.932 7983.476 - 8043.055: 34.8424% ( 426) 00:10:34.932 8043.055 - 8102.633: 37.6335% ( 418) 00:10:34.932 8102.633 - 8162.211: 40.5048% ( 430) 00:10:34.932 8162.211 - 8221.789: 43.3560% ( 427) 00:10:34.932 8221.789 - 8281.367: 46.2407% ( 432) 00:10:34.932 8281.367 - 8340.945: 49.2722% ( 454) 00:10:34.932 8340.945 - 8400.524: 52.2102% ( 440) 00:10:34.932 8400.524 - 8460.102: 55.1482% ( 440) 00:10:34.932 8460.102 - 8519.680: 58.1464% ( 449) 00:10:34.932 8519.680 - 8579.258: 61.0844% ( 440) 00:10:34.932 8579.258 - 8638.836: 64.0358% ( 442) 00:10:34.932 8638.836 - 8698.415: 67.0807% ( 456) 00:10:34.932 8698.415 - 8757.993: 69.9519% ( 430) 00:10:34.932 8757.993 - 8817.571: 72.8966% ( 441) 00:10:34.932 8817.571 - 8877.149: 75.8681% ( 445) 00:10:34.932 8877.149 - 8936.727: 78.5457% ( 401) 00:10:34.932 8936.727 - 8996.305: 81.3502% ( 420) 00:10:34.932 8996.305 - 9055.884: 83.9009% ( 382) 00:10:34.932 9055.884 - 9115.462: 86.1846% ( 342) 00:10:34.932 9115.462 - 9175.040: 88.1544% ( 295) 00:10:34.932 9175.040 - 9234.618: 89.9372% ( 267) 00:10:34.932 9234.618 - 9294.196: 91.4730% ( 230) 00:10:34.932 9294.196 - 9353.775: 92.6149% ( 171) 00:10:34.932 9353.775 - 9413.353: 93.4896% ( 131) 00:10:34.932 9413.353 - 9472.931: 94.1506% ( 99) 00:10:34.932 9472.931 - 9532.509: 94.6982% ( 82) 00:10:34.932 9532.509 - 9592.087: 95.1723% ( 71) 00:10:34.932 9592.087 - 9651.665: 95.5262% ( 53) 00:10:34.932 9651.665 - 9711.244: 95.8267% ( 45) 00:10:34.932 9711.244 - 9770.822: 96.1205% ( 44) 00:10:34.932 9770.822 - 9830.400: 96.3876% ( 40) 00:10:34.932 9830.400 - 9889.978: 96.6747% ( 43) 00:10:34.932 9889.978 - 9949.556: 96.9151% ( 36) 00:10:34.932 9949.556 - 10009.135: 97.1488% ( 35) 00:10:34.932 10009.135 - 10068.713: 97.3758% ( 34) 00:10:34.932 10068.713 - 10128.291: 97.5962% ( 33) 00:10:34.932 10128.291 - 10187.869: 97.7631% ( 25) 00:10:34.932 10187.869 - 10247.447: 97.9167% ( 23) 00:10:34.932 10247.447 - 10307.025: 98.0168% ( 15) 00:10:34.932 10307.025 - 10366.604: 98.1170% ( 15) 00:10:34.932 10366.604 - 10426.182: 98.1904% ( 11) 00:10:34.932 10426.182 - 10485.760: 98.2305% ( 6) 00:10:34.932 10485.760 - 10545.338: 98.2639% ( 5) 00:10:34.932 10545.338 - 10604.916: 98.3040% ( 6) 00:10:34.932 10604.916 - 10664.495: 98.3373% ( 5) 00:10:34.932 10664.495 - 10724.073: 98.3774% ( 6) 00:10:34.932 10724.073 - 10783.651: 98.4108% ( 5) 00:10:34.932 10783.651 - 10843.229: 98.4442% ( 5) 00:10:34.932 10843.229 - 10902.807: 98.4842% ( 6) 00:10:34.932 10902.807 - 10962.385: 98.5110% ( 4) 00:10:34.932 10962.385 - 11021.964: 98.5510% ( 6) 00:10:34.932 11021.964 - 11081.542: 98.5911% ( 6) 00:10:34.932 11081.542 - 11141.120: 98.6178% ( 4) 00:10:34.932 11141.120 - 11200.698: 98.6512% ( 5) 00:10:34.932 11200.698 - 11260.276: 98.6846% ( 5) 00:10:34.932 11260.276 - 
11319.855: 98.7179% ( 5) 00:10:34.932 11319.855 - 11379.433: 98.7580% ( 6) 00:10:34.932 11379.433 - 11439.011: 98.7914% ( 5) 00:10:34.932 11439.011 - 11498.589: 98.8114% ( 3) 00:10:34.932 11498.589 - 11558.167: 98.8315% ( 3) 00:10:34.932 11558.167 - 11617.745: 98.8515% ( 3) 00:10:34.932 11617.745 - 11677.324: 98.8649% ( 2) 00:10:34.932 11677.324 - 11736.902: 98.8782% ( 2) 00:10:34.932 11736.902 - 11796.480: 98.8849% ( 1) 00:10:34.932 11796.480 - 11856.058: 98.9049% ( 3) 00:10:34.932 11856.058 - 11915.636: 98.9183% ( 2) 00:10:34.932 11915.636 - 11975.215: 98.9383% ( 3) 00:10:34.932 11975.215 - 12034.793: 98.9583% ( 3) 00:10:34.932 12034.793 - 12094.371: 98.9717% ( 2) 00:10:34.932 12094.371 - 12153.949: 98.9917% ( 3) 00:10:34.932 12153.949 - 12213.527: 99.0118% ( 3) 00:10:34.932 12213.527 - 12273.105: 99.0251% ( 2) 00:10:34.932 12273.105 - 12332.684: 99.0385% ( 2) 00:10:34.932 12332.684 - 12392.262: 99.0585% ( 3) 00:10:34.932 12392.262 - 12451.840: 99.0718% ( 2) 00:10:34.932 12451.840 - 12511.418: 99.0852% ( 2) 00:10:34.932 12511.418 - 12570.996: 99.1052% ( 3) 00:10:34.932 12570.996 - 12630.575: 99.1253% ( 3) 00:10:34.932 12630.575 - 12690.153: 99.1386% ( 2) 00:10:34.932 12690.153 - 12749.731: 99.1453% ( 1) 00:10:34.932 29193.309 - 29312.465: 99.1587% ( 2) 00:10:34.932 29312.465 - 29431.622: 99.1787% ( 3) 00:10:34.932 29431.622 - 29550.778: 99.2054% ( 4) 00:10:34.932 29550.778 - 29669.935: 99.2321% ( 4) 00:10:34.932 29669.935 - 29789.091: 99.2521% ( 3) 00:10:34.932 29789.091 - 29908.247: 99.2722% ( 3) 00:10:34.932 29908.247 - 30027.404: 99.2989% ( 4) 00:10:34.932 30027.404 - 30146.560: 99.3256% ( 4) 00:10:34.932 30146.560 - 30265.716: 99.3456% ( 3) 00:10:34.932 30265.716 - 30384.873: 99.3657% ( 3) 00:10:34.932 30384.873 - 30504.029: 99.3857% ( 3) 00:10:34.932 30504.029 - 30742.342: 99.4324% ( 7) 00:10:34.932 30742.342 - 30980.655: 99.4792% ( 7) 00:10:34.932 30980.655 - 31218.967: 99.5259% ( 7) 00:10:34.932 31218.967 - 31457.280: 99.5726% ( 7) 00:10:34.932 31457.280 - 31695.593: 99.6261% ( 8) 00:10:34.932 31695.593 - 31933.905: 99.6661% ( 6) 00:10:34.932 31933.905 - 32172.218: 99.7129% ( 7) 00:10:34.932 32172.218 - 32410.531: 99.7596% ( 7) 00:10:34.932 32410.531 - 32648.844: 99.8064% ( 7) 00:10:34.932 32648.844 - 32887.156: 99.8531% ( 7) 00:10:34.932 32887.156 - 33125.469: 99.8932% ( 6) 00:10:34.932 33125.469 - 33363.782: 99.9466% ( 8) 00:10:34.932 33363.782 - 33602.095: 99.9866% ( 6) 00:10:34.932 33602.095 - 33840.407: 100.0000% ( 2) 00:10:34.932 00:10:34.932 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:34.932 ============================================================================== 00:10:34.932 Range in us Cumulative IO count 00:10:34.932 6791.913 - 6821.702: 0.0134% ( 2) 00:10:34.932 6821.702 - 6851.491: 0.0267% ( 2) 00:10:34.932 6851.491 - 6881.280: 0.0401% ( 2) 00:10:34.932 6881.280 - 6911.069: 0.0601% ( 3) 00:10:34.932 6911.069 - 6940.858: 0.0801% ( 3) 00:10:34.932 6940.858 - 6970.647: 0.1135% ( 5) 00:10:34.932 6970.647 - 7000.436: 0.1669% ( 8) 00:10:34.932 7000.436 - 7030.225: 0.2537% ( 13) 00:10:34.932 7030.225 - 7060.015: 0.3272% ( 11) 00:10:34.932 7060.015 - 7089.804: 0.4340% ( 16) 00:10:34.932 7089.804 - 7119.593: 0.5676% ( 20) 00:10:34.932 7119.593 - 7149.382: 0.7479% ( 27) 00:10:34.932 7149.382 - 7179.171: 1.0684% ( 48) 00:10:34.932 7179.171 - 7208.960: 1.5024% ( 65) 00:10:34.932 7208.960 - 7238.749: 1.9899% ( 73) 00:10:34.932 7238.749 - 7268.538: 2.5574% ( 85) 00:10:34.932 7268.538 - 7298.327: 3.1985% ( 96) 00:10:34.932 7298.327 - 7328.116: 3.9597% ( 
114) 00:10:34.932 7328.116 - 7357.905: 4.8210% ( 129) 00:10:34.932 7357.905 - 7387.695: 5.7759% ( 143) 00:10:34.932 7387.695 - 7417.484: 6.7975% ( 153) 00:10:34.932 7417.484 - 7447.273: 7.8526% ( 158) 00:10:34.932 7447.273 - 7477.062: 9.0011% ( 172) 00:10:34.932 7477.062 - 7506.851: 10.1362% ( 170) 00:10:34.932 7506.851 - 7536.640: 11.4316% ( 194) 00:10:34.932 7536.640 - 7566.429: 12.6870% ( 188) 00:10:34.932 7566.429 - 7596.218: 13.9490% ( 189) 00:10:34.932 7596.218 - 7626.007: 15.2644% ( 197) 00:10:34.932 7626.007 - 7685.585: 17.9354% ( 400) 00:10:34.932 7685.585 - 7745.164: 20.6263% ( 403) 00:10:34.932 7745.164 - 7804.742: 23.3507% ( 408) 00:10:34.932 7804.742 - 7864.320: 26.1151% ( 414) 00:10:34.932 7864.320 - 7923.898: 28.8996% ( 417) 00:10:34.932 7923.898 - 7983.476: 31.6907% ( 418) 00:10:34.932 7983.476 - 8043.055: 34.5286% ( 425) 00:10:34.932 8043.055 - 8102.633: 37.2930% ( 414) 00:10:34.932 8102.633 - 8162.211: 40.2577% ( 444) 00:10:34.933 8162.211 - 8221.789: 43.2225% ( 444) 00:10:34.933 8221.789 - 8281.367: 46.1405% ( 437) 00:10:34.933 8281.367 - 8340.945: 49.1520% ( 451) 00:10:34.933 8340.945 - 8400.524: 52.0967% ( 441) 00:10:34.933 8400.524 - 8460.102: 55.1015% ( 450) 00:10:34.933 8460.102 - 8519.680: 58.1130% ( 451) 00:10:34.933 8519.680 - 8579.258: 61.1645% ( 457) 00:10:34.933 8579.258 - 8638.836: 64.1092% ( 441) 00:10:34.933 8638.836 - 8698.415: 67.0807% ( 445) 00:10:34.933 8698.415 - 8757.993: 70.0721% ( 448) 00:10:34.933 8757.993 - 8817.571: 72.9501% ( 431) 00:10:34.933 8817.571 - 8877.149: 75.8614% ( 436) 00:10:34.933 8877.149 - 8936.727: 78.6859% ( 423) 00:10:34.933 8936.727 - 8996.305: 81.4236% ( 410) 00:10:34.933 8996.305 - 9055.884: 83.9410% ( 377) 00:10:34.933 9055.884 - 9115.462: 86.0844% ( 321) 00:10:34.933 9115.462 - 9175.040: 88.1544% ( 310) 00:10:34.933 9175.040 - 9234.618: 89.8838% ( 259) 00:10:34.933 9234.618 - 9294.196: 91.3528% ( 220) 00:10:34.933 9294.196 - 9353.775: 92.4613% ( 166) 00:10:34.933 9353.775 - 9413.353: 93.3360% ( 131) 00:10:34.933 9413.353 - 9472.931: 94.0438% ( 106) 00:10:34.933 9472.931 - 9532.509: 94.6047% ( 84) 00:10:34.933 9532.509 - 9592.087: 95.1055% ( 75) 00:10:34.933 9592.087 - 9651.665: 95.5395% ( 65) 00:10:34.933 9651.665 - 9711.244: 95.9068% ( 55) 00:10:34.933 9711.244 - 9770.822: 96.2206% ( 47) 00:10:34.933 9770.822 - 9830.400: 96.4944% ( 41) 00:10:34.933 9830.400 - 9889.978: 96.7748% ( 42) 00:10:34.933 9889.978 - 9949.556: 97.0152% ( 36) 00:10:34.933 9949.556 - 10009.135: 97.1955% ( 27) 00:10:34.933 10009.135 - 10068.713: 97.3624% ( 25) 00:10:34.933 10068.713 - 10128.291: 97.5093% ( 22) 00:10:34.933 10128.291 - 10187.869: 97.6629% ( 23) 00:10:34.933 10187.869 - 10247.447: 97.8232% ( 24) 00:10:34.933 10247.447 - 10307.025: 97.9434% ( 18) 00:10:34.933 10307.025 - 10366.604: 98.0636% ( 18) 00:10:34.933 10366.604 - 10426.182: 98.1838% ( 18) 00:10:34.933 10426.182 - 10485.760: 98.2772% ( 14) 00:10:34.933 10485.760 - 10545.338: 98.3307% ( 8) 00:10:34.933 10545.338 - 10604.916: 98.3774% ( 7) 00:10:34.933 10604.916 - 10664.495: 98.4108% ( 5) 00:10:34.933 10664.495 - 10724.073: 98.4509% ( 6) 00:10:34.933 10724.073 - 10783.651: 98.4842% ( 5) 00:10:34.933 10783.651 - 10843.229: 98.5176% ( 5) 00:10:34.933 10843.229 - 10902.807: 98.5510% ( 5) 00:10:34.933 10902.807 - 10962.385: 98.5844% ( 5) 00:10:34.933 10962.385 - 11021.964: 98.6178% ( 5) 00:10:34.933 11021.964 - 11081.542: 98.6579% ( 6) 00:10:34.933 11081.542 - 11141.120: 98.6846% ( 4) 00:10:34.933 11141.120 - 11200.698: 98.7179% ( 5) 00:10:34.933 11200.698 - 11260.276: 98.7580% ( 6) 
00:10:34.933 11260.276 - 11319.855: 98.7981% ( 6) 00:10:34.933 11319.855 - 11379.433: 98.8315% ( 5) 00:10:34.933 11379.433 - 11439.011: 98.8649% ( 5) 00:10:34.933 11439.011 - 11498.589: 98.8916% ( 4) 00:10:34.933 11498.589 - 11558.167: 98.9249% ( 5) 00:10:34.933 11558.167 - 11617.745: 98.9583% ( 5) 00:10:34.933 11617.745 - 11677.324: 98.9984% ( 6) 00:10:34.933 11677.324 - 11736.902: 99.0385% ( 6) 00:10:34.933 11736.902 - 11796.480: 99.0652% ( 4) 00:10:34.933 11796.480 - 11856.058: 99.0986% ( 5) 00:10:34.933 11856.058 - 11915.636: 99.1186% ( 3) 00:10:34.933 11915.636 - 11975.215: 99.1386% ( 3) 00:10:34.933 11975.215 - 12034.793: 99.1453% ( 1) 00:10:34.933 27763.433 - 27882.589: 99.1653% ( 3) 00:10:34.933 27882.589 - 28001.745: 99.1920% ( 4) 00:10:34.933 28001.745 - 28120.902: 99.2188% ( 4) 00:10:34.933 28120.902 - 28240.058: 99.2388% ( 3) 00:10:34.933 28240.058 - 28359.215: 99.2588% ( 3) 00:10:34.933 28359.215 - 28478.371: 99.2788% ( 3) 00:10:34.933 28478.371 - 28597.527: 99.3056% ( 4) 00:10:34.933 28597.527 - 28716.684: 99.3323% ( 4) 00:10:34.933 28716.684 - 28835.840: 99.3523% ( 3) 00:10:34.933 28835.840 - 28954.996: 99.3790% ( 4) 00:10:34.933 28954.996 - 29074.153: 99.4057% ( 4) 00:10:34.933 29074.153 - 29193.309: 99.4324% ( 4) 00:10:34.933 29193.309 - 29312.465: 99.4525% ( 3) 00:10:34.933 29312.465 - 29431.622: 99.4725% ( 3) 00:10:34.933 29431.622 - 29550.778: 99.4992% ( 4) 00:10:34.933 29550.778 - 29669.935: 99.5259% ( 4) 00:10:34.933 29669.935 - 29789.091: 99.5459% ( 3) 00:10:34.933 29789.091 - 29908.247: 99.5660% ( 3) 00:10:34.933 29908.247 - 30027.404: 99.5927% ( 4) 00:10:34.933 30027.404 - 30146.560: 99.6194% ( 4) 00:10:34.933 30146.560 - 30265.716: 99.6394% ( 3) 00:10:34.933 30265.716 - 30384.873: 99.6595% ( 3) 00:10:34.933 30384.873 - 30504.029: 99.6795% ( 3) 00:10:34.933 30504.029 - 30742.342: 99.7262% ( 7) 00:10:34.933 30742.342 - 30980.655: 99.7796% ( 8) 00:10:34.933 30980.655 - 31218.967: 99.8130% ( 5) 00:10:34.933 31218.967 - 31457.280: 99.8598% ( 7) 00:10:34.933 31457.280 - 31695.593: 99.9132% ( 8) 00:10:34.933 31695.593 - 31933.905: 99.9599% ( 7) 00:10:34.933 31933.905 - 32172.218: 100.0000% ( 6) 00:10:34.933 00:10:34.933 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:34.933 ============================================================================== 00:10:34.933 Range in us Cumulative IO count 00:10:34.933 6762.124 - 6791.913: 0.0067% ( 1) 00:10:34.933 6791.913 - 6821.702: 0.0334% ( 4) 00:10:34.933 6821.702 - 6851.491: 0.0601% ( 4) 00:10:34.933 6851.491 - 6881.280: 0.0868% ( 4) 00:10:34.933 6881.280 - 6911.069: 0.1202% ( 5) 00:10:34.933 6911.069 - 6940.858: 0.1402% ( 3) 00:10:34.933 6940.858 - 6970.647: 0.1669% ( 4) 00:10:34.933 6970.647 - 7000.436: 0.2404% ( 11) 00:10:34.933 7000.436 - 7030.225: 0.3005% ( 9) 00:10:34.933 7030.225 - 7060.015: 0.3873% ( 13) 00:10:34.933 7060.015 - 7089.804: 0.4741% ( 13) 00:10:34.933 7089.804 - 7119.593: 0.5943% ( 18) 00:10:34.933 7119.593 - 7149.382: 0.8413% ( 37) 00:10:34.933 7149.382 - 7179.171: 1.1418% ( 45) 00:10:34.933 7179.171 - 7208.960: 1.4757% ( 50) 00:10:34.933 7208.960 - 7238.749: 1.9231% ( 67) 00:10:34.933 7238.749 - 7268.538: 2.4840% ( 84) 00:10:34.933 7268.538 - 7298.327: 3.1517% ( 100) 00:10:34.933 7298.327 - 7328.116: 3.8795% ( 109) 00:10:34.933 7328.116 - 7357.905: 4.7075% ( 124) 00:10:34.933 7357.905 - 7387.695: 5.6157% ( 136) 00:10:34.933 7387.695 - 7417.484: 6.5572% ( 141) 00:10:34.933 7417.484 - 7447.273: 7.6522% ( 164) 00:10:34.933 7447.273 - 7477.062: 8.7740% ( 168) 00:10:34.933 7477.062 - 
7506.851: 9.9025% ( 169) 00:10:34.933 7506.851 - 7536.640: 11.1044% ( 180) 00:10:34.933 7536.640 - 7566.429: 12.3598% ( 188) 00:10:34.933 7566.429 - 7596.218: 13.6619% ( 195) 00:10:34.933 7596.218 - 7626.007: 14.9973% ( 200) 00:10:34.933 7626.007 - 7685.585: 17.6616% ( 399) 00:10:34.933 7685.585 - 7745.164: 20.3726% ( 406) 00:10:34.933 7745.164 - 7804.742: 23.1437% ( 415) 00:10:34.933 7804.742 - 7864.320: 25.9215% ( 416) 00:10:34.933 7864.320 - 7923.898: 28.7593% ( 425) 00:10:34.933 7923.898 - 7983.476: 31.5772% ( 422) 00:10:34.933 7983.476 - 8043.055: 34.4351% ( 428) 00:10:34.933 8043.055 - 8102.633: 37.2863% ( 427) 00:10:34.933 8102.633 - 8162.211: 40.1910% ( 435) 00:10:34.933 8162.211 - 8221.789: 43.1090% ( 437) 00:10:34.933 8221.789 - 8281.367: 46.0938% ( 447) 00:10:34.933 8281.367 - 8340.945: 49.0785% ( 447) 00:10:34.933 8340.945 - 8400.524: 52.1167% ( 455) 00:10:34.933 8400.524 - 8460.102: 55.0948% ( 446) 00:10:34.933 8460.102 - 8519.680: 58.0996% ( 450) 00:10:34.933 8519.680 - 8579.258: 61.1111% ( 451) 00:10:34.933 8579.258 - 8638.836: 64.1760% ( 459) 00:10:34.933 8638.836 - 8698.415: 67.1474% ( 445) 00:10:34.933 8698.415 - 8757.993: 70.0788% ( 439) 00:10:34.933 8757.993 - 8817.571: 73.0302% ( 442) 00:10:34.933 8817.571 - 8877.149: 75.9816% ( 442) 00:10:34.933 8877.149 - 8936.727: 78.7193% ( 410) 00:10:34.933 8936.727 - 8996.305: 81.5037% ( 417) 00:10:34.933 8996.305 - 9055.884: 84.0278% ( 378) 00:10:34.933 9055.884 - 9115.462: 86.2246% ( 329) 00:10:34.933 9115.462 - 9175.040: 88.2278% ( 300) 00:10:34.933 9175.040 - 9234.618: 89.9973% ( 265) 00:10:34.933 9234.618 - 9294.196: 91.5331% ( 230) 00:10:34.933 9294.196 - 9353.775: 92.6349% ( 165) 00:10:34.933 9353.775 - 9413.353: 93.4963% ( 129) 00:10:34.933 9413.353 - 9472.931: 94.1840% ( 103) 00:10:34.933 9472.931 - 9532.509: 94.7650% ( 87) 00:10:34.933 9532.509 - 9592.087: 95.2390% ( 71) 00:10:34.933 9592.087 - 9651.665: 95.6263% ( 58) 00:10:34.933 9651.665 - 9711.244: 95.9268% ( 45) 00:10:34.933 9711.244 - 9770.822: 96.2273% ( 45) 00:10:34.933 9770.822 - 9830.400: 96.5211% ( 44) 00:10:34.933 9830.400 - 9889.978: 96.7882% ( 40) 00:10:34.933 9889.978 - 9949.556: 97.0353% ( 37) 00:10:34.933 9949.556 - 10009.135: 97.2890% ( 38) 00:10:34.933 10009.135 - 10068.713: 97.5093% ( 33) 00:10:34.933 10068.713 - 10128.291: 97.6696% ( 24) 00:10:34.933 10128.291 - 10187.869: 97.8098% ( 21) 00:10:34.933 10187.869 - 10247.447: 97.9367% ( 19) 00:10:34.933 10247.447 - 10307.025: 98.0369% ( 15) 00:10:34.933 10307.025 - 10366.604: 98.1237% ( 13) 00:10:34.933 10366.604 - 10426.182: 98.2105% ( 13) 00:10:34.933 10426.182 - 10485.760: 98.2439% ( 5) 00:10:34.933 10485.760 - 10545.338: 98.2973% ( 8) 00:10:34.933 10545.338 - 10604.916: 98.3507% ( 8) 00:10:34.933 10604.916 - 10664.495: 98.4108% ( 9) 00:10:34.933 10664.495 - 10724.073: 98.4575% ( 7) 00:10:34.933 10724.073 - 10783.651: 98.5110% ( 8) 00:10:34.933 10783.651 - 10843.229: 98.5710% ( 9) 00:10:34.933 10843.229 - 10902.807: 98.6245% ( 8) 00:10:34.933 10902.807 - 10962.385: 98.6712% ( 7) 00:10:34.933 10962.385 - 11021.964: 98.7046% ( 5) 00:10:34.933 11021.964 - 11081.542: 98.7313% ( 4) 00:10:34.933 11081.542 - 11141.120: 98.7513% ( 3) 00:10:34.934 11141.120 - 11200.698: 98.7714% ( 3) 00:10:34.934 11200.698 - 11260.276: 98.7847% ( 2) 00:10:34.934 11260.276 - 11319.855: 98.8048% ( 3) 00:10:34.934 11319.855 - 11379.433: 98.8248% ( 3) 00:10:34.934 11379.433 - 11439.011: 98.8448% ( 3) 00:10:34.934 11439.011 - 11498.589: 98.8582% ( 2) 00:10:34.934 11498.589 - 11558.167: 98.8715% ( 2) 00:10:34.934 11558.167 - 
11617.745: 98.8916% ( 3) 00:10:34.934 11617.745 - 11677.324: 98.9116% ( 3) 00:10:34.934 11677.324 - 11736.902: 98.9249% ( 2) 00:10:34.934 11736.902 - 11796.480: 98.9450% ( 3) 00:10:34.934 11796.480 - 11856.058: 98.9650% ( 3) 00:10:34.934 11856.058 - 11915.636: 98.9850% ( 3) 00:10:34.934 11915.636 - 11975.215: 98.9984% ( 2) 00:10:34.934 11975.215 - 12034.793: 99.0118% ( 2) 00:10:34.934 12034.793 - 12094.371: 99.0318% ( 3) 00:10:34.934 12094.371 - 12153.949: 99.0451% ( 2) 00:10:34.934 12153.949 - 12213.527: 99.0652% ( 3) 00:10:34.934 12213.527 - 12273.105: 99.0852% ( 3) 00:10:34.934 12273.105 - 12332.684: 99.1052% ( 3) 00:10:34.934 12332.684 - 12392.262: 99.1186% ( 2) 00:10:34.934 12392.262 - 12451.840: 99.1386% ( 3) 00:10:34.934 12451.840 - 12511.418: 99.1453% ( 1) 00:10:34.934 26571.869 - 26691.025: 99.1720% ( 4) 00:10:34.934 26691.025 - 26810.182: 99.1987% ( 4) 00:10:34.934 26810.182 - 26929.338: 99.2254% ( 4) 00:10:34.934 26929.338 - 27048.495: 99.2455% ( 3) 00:10:34.934 27048.495 - 27167.651: 99.2722% ( 4) 00:10:34.934 27167.651 - 27286.807: 99.3056% ( 5) 00:10:34.934 27286.807 - 27405.964: 99.3256% ( 3) 00:10:34.934 27405.964 - 27525.120: 99.3523% ( 4) 00:10:34.934 27525.120 - 27644.276: 99.3790% ( 4) 00:10:34.934 27644.276 - 27763.433: 99.4057% ( 4) 00:10:34.934 27763.433 - 27882.589: 99.4324% ( 4) 00:10:34.934 27882.589 - 28001.745: 99.4591% ( 4) 00:10:34.934 28001.745 - 28120.902: 99.4858% ( 4) 00:10:34.934 28120.902 - 28240.058: 99.5059% ( 3) 00:10:34.934 28240.058 - 28359.215: 99.5326% ( 4) 00:10:34.934 28359.215 - 28478.371: 99.5593% ( 4) 00:10:34.934 28478.371 - 28597.527: 99.5860% ( 4) 00:10:34.934 28597.527 - 28716.684: 99.6194% ( 5) 00:10:34.934 28716.684 - 28835.840: 99.6394% ( 3) 00:10:34.934 28835.840 - 28954.996: 99.6661% ( 4) 00:10:34.934 28954.996 - 29074.153: 99.6928% ( 4) 00:10:34.934 29074.153 - 29193.309: 99.7196% ( 4) 00:10:34.934 29193.309 - 29312.465: 99.7396% ( 3) 00:10:34.934 29312.465 - 29431.622: 99.7596% ( 3) 00:10:34.934 29431.622 - 29550.778: 99.7863% ( 4) 00:10:34.934 29550.778 - 29669.935: 99.8130% ( 4) 00:10:34.934 29669.935 - 29789.091: 99.8397% ( 4) 00:10:34.934 29789.091 - 29908.247: 99.8665% ( 4) 00:10:34.934 29908.247 - 30027.404: 99.8932% ( 4) 00:10:34.934 30027.404 - 30146.560: 99.9199% ( 4) 00:10:34.934 30146.560 - 30265.716: 99.9533% ( 5) 00:10:34.934 30265.716 - 30384.873: 99.9800% ( 4) 00:10:34.934 30384.873 - 30504.029: 100.0000% ( 3) 00:10:34.934 00:10:34.934 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:34.934 ============================================================================== 00:10:34.934 Range in us Cumulative IO count 00:10:34.934 6702.545 - 6732.335: 0.0067% ( 1) 00:10:34.934 6732.335 - 6762.124: 0.0134% ( 1) 00:10:34.934 6762.124 - 6791.913: 0.0267% ( 2) 00:10:34.934 6791.913 - 6821.702: 0.0401% ( 2) 00:10:34.934 6821.702 - 6851.491: 0.0534% ( 2) 00:10:34.934 6851.491 - 6881.280: 0.0801% ( 4) 00:10:34.934 6881.280 - 6911.069: 0.1202% ( 6) 00:10:34.934 6911.069 - 6940.858: 0.1335% ( 2) 00:10:34.934 6940.858 - 6970.647: 0.1936% ( 9) 00:10:34.934 6970.647 - 7000.436: 0.2604% ( 10) 00:10:34.934 7000.436 - 7030.225: 0.3205% ( 9) 00:10:34.934 7030.225 - 7060.015: 0.4407% ( 18) 00:10:34.934 7060.015 - 7089.804: 0.5342% ( 14) 00:10:34.934 7089.804 - 7119.593: 0.6811% ( 22) 00:10:34.934 7119.593 - 7149.382: 0.9081% ( 34) 00:10:34.934 7149.382 - 7179.171: 1.2286% ( 48) 00:10:34.934 7179.171 - 7208.960: 1.6026% ( 56) 00:10:34.934 7208.960 - 7238.749: 2.0700% ( 70) 00:10:34.934 7238.749 - 7268.538: 2.5841% ( 77) 
00:10:34.934 7268.538 - 7298.327: 3.2853% ( 105) 00:10:34.934 7298.327 - 7328.116: 4.0131% ( 109) 00:10:34.934 7328.116 - 7357.905: 4.8210% ( 121) 00:10:34.934 7357.905 - 7387.695: 5.6691% ( 127) 00:10:34.934 7387.695 - 7417.484: 6.6173% ( 142) 00:10:34.934 7417.484 - 7447.273: 7.6522% ( 155) 00:10:34.934 7447.273 - 7477.062: 8.6672% ( 152) 00:10:34.934 7477.062 - 7506.851: 9.8024% ( 170) 00:10:34.934 7506.851 - 7536.640: 11.0110% ( 181) 00:10:34.934 7536.640 - 7566.429: 12.2930% ( 192) 00:10:34.934 7566.429 - 7596.218: 13.6351% ( 201) 00:10:34.934 7596.218 - 7626.007: 14.9907% ( 203) 00:10:34.934 7626.007 - 7685.585: 17.6215% ( 394) 00:10:34.934 7685.585 - 7745.164: 20.3259% ( 405) 00:10:34.934 7745.164 - 7804.742: 23.1103% ( 417) 00:10:34.934 7804.742 - 7864.320: 25.9282% ( 422) 00:10:34.934 7864.320 - 7923.898: 28.7593% ( 424) 00:10:34.934 7923.898 - 7983.476: 31.6440% ( 432) 00:10:34.934 7983.476 - 8043.055: 34.4952% ( 427) 00:10:34.934 8043.055 - 8102.633: 37.2863% ( 418) 00:10:34.934 8102.633 - 8162.211: 40.2778% ( 448) 00:10:34.934 8162.211 - 8221.789: 43.2091% ( 439) 00:10:34.934 8221.789 - 8281.367: 46.2006% ( 448) 00:10:34.934 8281.367 - 8340.945: 49.2054% ( 450) 00:10:34.934 8340.945 - 8400.524: 52.1968% ( 448) 00:10:34.934 8400.524 - 8460.102: 55.1950% ( 449) 00:10:34.934 8460.102 - 8519.680: 58.2532% ( 458) 00:10:34.934 8519.680 - 8579.258: 61.2046% ( 442) 00:10:34.934 8579.258 - 8638.836: 64.2228% ( 452) 00:10:34.934 8638.836 - 8698.415: 67.1541% ( 439) 00:10:34.934 8698.415 - 8757.993: 70.1122% ( 443) 00:10:34.934 8757.993 - 8817.571: 73.0435% ( 439) 00:10:34.934 8817.571 - 8877.149: 75.9148% ( 430) 00:10:34.934 8877.149 - 8936.727: 78.7193% ( 420) 00:10:34.934 8936.727 - 8996.305: 81.4570% ( 410) 00:10:34.934 8996.305 - 9055.884: 83.9877% ( 379) 00:10:34.934 9055.884 - 9115.462: 86.2513% ( 339) 00:10:34.934 9115.462 - 9175.040: 88.2679% ( 302) 00:10:34.934 9175.040 - 9234.618: 90.0908% ( 273) 00:10:34.934 9234.618 - 9294.196: 91.5198% ( 214) 00:10:34.934 9294.196 - 9353.775: 92.7618% ( 186) 00:10:34.934 9353.775 - 9413.353: 93.6298% ( 130) 00:10:34.934 9413.353 - 9472.931: 94.2975% ( 100) 00:10:34.934 9472.931 - 9532.509: 94.8718% ( 86) 00:10:34.934 9532.509 - 9592.087: 95.3860% ( 77) 00:10:34.934 9592.087 - 9651.665: 95.8267% ( 66) 00:10:34.934 9651.665 - 9711.244: 96.1739% ( 52) 00:10:34.934 9711.244 - 9770.822: 96.4610% ( 43) 00:10:34.934 9770.822 - 9830.400: 96.7014% ( 36) 00:10:34.934 9830.400 - 9889.978: 96.9151% ( 32) 00:10:34.934 9889.978 - 9949.556: 97.1087% ( 29) 00:10:34.934 9949.556 - 10009.135: 97.2957% ( 28) 00:10:34.934 10009.135 - 10068.713: 97.4893% ( 29) 00:10:34.934 10068.713 - 10128.291: 97.6429% ( 23) 00:10:34.934 10128.291 - 10187.869: 97.7564% ( 17) 00:10:34.934 10187.869 - 10247.447: 97.8699% ( 17) 00:10:34.934 10247.447 - 10307.025: 97.9701% ( 15) 00:10:34.934 10307.025 - 10366.604: 98.0235% ( 8) 00:10:34.934 10366.604 - 10426.182: 98.0569% ( 5) 00:10:34.934 10426.182 - 10485.760: 98.1036% ( 7) 00:10:34.934 10485.760 - 10545.338: 98.1437% ( 6) 00:10:34.934 10545.338 - 10604.916: 98.1771% ( 5) 00:10:34.934 10604.916 - 10664.495: 98.2238% ( 7) 00:10:34.934 10664.495 - 10724.073: 98.2572% ( 5) 00:10:34.935 10724.073 - 10783.651: 98.2973% ( 6) 00:10:34.935 10783.651 - 10843.229: 98.3373% ( 6) 00:10:34.935 10843.229 - 10902.807: 98.3774% ( 6) 00:10:34.935 10902.807 - 10962.385: 98.4175% ( 6) 00:10:34.935 10962.385 - 11021.964: 98.4509% ( 5) 00:10:34.935 11021.964 - 11081.542: 98.4776% ( 4) 00:10:34.935 11081.542 - 11141.120: 98.5176% ( 6) 00:10:34.935 
11141.120 - 11200.698: 98.5577% ( 6) 00:10:34.935 11200.698 - 11260.276: 98.5911% ( 5) 00:10:34.935 11260.276 - 11319.855: 98.6178% ( 4) 00:10:34.935 11319.855 - 11379.433: 98.6378% ( 3) 00:10:34.935 11379.433 - 11439.011: 98.6579% ( 3) 00:10:34.935 11439.011 - 11498.589: 98.6712% ( 2) 00:10:34.935 11498.589 - 11558.167: 98.6912% ( 3) 00:10:34.935 11558.167 - 11617.745: 98.7113% ( 3) 00:10:34.935 11617.745 - 11677.324: 98.7313% ( 3) 00:10:34.935 11677.324 - 11736.902: 98.7513% ( 3) 00:10:34.935 11736.902 - 11796.480: 98.7647% ( 2) 00:10:34.935 11796.480 - 11856.058: 98.7847% ( 3) 00:10:34.935 11856.058 - 11915.636: 98.8048% ( 3) 00:10:34.935 11915.636 - 11975.215: 98.8248% ( 3) 00:10:34.935 11975.215 - 12034.793: 98.8448% ( 3) 00:10:34.935 12034.793 - 12094.371: 98.8582% ( 2) 00:10:34.935 12094.371 - 12153.949: 98.8782% ( 3) 00:10:34.935 12153.949 - 12213.527: 98.8982% ( 3) 00:10:34.935 12213.527 - 12273.105: 98.9183% ( 3) 00:10:34.935 12273.105 - 12332.684: 98.9383% ( 3) 00:10:34.935 12332.684 - 12392.262: 98.9517% ( 2) 00:10:34.935 12392.262 - 12451.840: 98.9650% ( 2) 00:10:34.935 12451.840 - 12511.418: 98.9784% ( 2) 00:10:34.935 12511.418 - 12570.996: 98.9984% ( 3) 00:10:34.935 12570.996 - 12630.575: 99.0184% ( 3) 00:10:34.935 12630.575 - 12690.153: 99.0385% ( 3) 00:10:34.935 12690.153 - 12749.731: 99.0518% ( 2) 00:10:34.935 12749.731 - 12809.309: 99.0718% ( 3) 00:10:34.935 12809.309 - 12868.887: 99.0852% ( 2) 00:10:34.935 12868.887 - 12928.465: 99.1119% ( 4) 00:10:34.935 12928.465 - 12988.044: 99.1386% ( 4) 00:10:34.935 12988.044 - 13047.622: 99.1453% ( 1) 00:10:34.935 24784.524 - 24903.680: 99.1587% ( 2) 00:10:34.935 24903.680 - 25022.836: 99.1787% ( 3) 00:10:34.935 25022.836 - 25141.993: 99.2054% ( 4) 00:10:34.935 25141.993 - 25261.149: 99.2254% ( 3) 00:10:34.935 25261.149 - 25380.305: 99.2521% ( 4) 00:10:34.935 25380.305 - 25499.462: 99.2788% ( 4) 00:10:34.935 25499.462 - 25618.618: 99.3056% ( 4) 00:10:34.935 25618.618 - 25737.775: 99.3389% ( 5) 00:10:34.935 25737.775 - 25856.931: 99.3590% ( 3) 00:10:34.935 25856.931 - 25976.087: 99.3857% ( 4) 00:10:34.935 25976.087 - 26095.244: 99.4057% ( 3) 00:10:34.935 26095.244 - 26214.400: 99.4324% ( 4) 00:10:34.935 26214.400 - 26333.556: 99.4591% ( 4) 00:10:34.935 26333.556 - 26452.713: 99.4858% ( 4) 00:10:34.935 26452.713 - 26571.869: 99.5126% ( 4) 00:10:34.935 26571.869 - 26691.025: 99.5393% ( 4) 00:10:34.935 26691.025 - 26810.182: 99.5660% ( 4) 00:10:34.935 26810.182 - 26929.338: 99.5927% ( 4) 00:10:34.935 26929.338 - 27048.495: 99.6127% ( 3) 00:10:34.935 27048.495 - 27167.651: 99.6461% ( 5) 00:10:34.935 27167.651 - 27286.807: 99.6728% ( 4) 00:10:34.935 27286.807 - 27405.964: 99.7062% ( 5) 00:10:34.935 27405.964 - 27525.120: 99.7329% ( 4) 00:10:34.935 27525.120 - 27644.276: 99.7596% ( 4) 00:10:34.935 27644.276 - 27763.433: 99.7863% ( 4) 00:10:34.935 27763.433 - 27882.589: 99.8064% ( 3) 00:10:34.935 27882.589 - 28001.745: 99.8331% ( 4) 00:10:34.935 28001.745 - 28120.902: 99.8531% ( 3) 00:10:34.935 28120.902 - 28240.058: 99.8798% ( 4) 00:10:34.935 28240.058 - 28359.215: 99.8998% ( 3) 00:10:34.935 28359.215 - 28478.371: 99.9265% ( 4) 00:10:34.935 28478.371 - 28597.527: 99.9533% ( 4) 00:10:34.935 28597.527 - 28716.684: 99.9800% ( 4) 00:10:34.935 28716.684 - 28835.840: 100.0000% ( 3) 00:10:34.935 00:10:34.935 19:11:12 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:36.313 Initializing NVMe Controllers 00:10:36.313 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 
00:10:36.313 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:36.313 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:36.313 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:36.313 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:36.313 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:36.313 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:36.313 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:36.313 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:36.313 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:36.313 Initialization complete. Launching workers. 00:10:36.313 ======================================================== 00:10:36.313 Latency(us) 00:10:36.313 Device Information : IOPS MiB/s Average min max 00:10:36.313 PCIE (0000:00:06.0) NSID 1 from core 0: 10325.83 121.01 12389.50 9090.94 41554.59 00:10:36.313 PCIE (0000:00:07.0) NSID 1 from core 0: 10325.83 121.01 12375.45 9365.27 40727.79 00:10:36.313 PCIE (0000:00:09.0) NSID 1 from core 0: 10325.83 121.01 12360.42 9176.21 41404.09 00:10:36.313 PCIE (0000:00:08.0) NSID 1 from core 0: 10453.31 122.50 12195.41 8986.32 25767.26 00:10:36.313 PCIE (0000:00:08.0) NSID 2 from core 0: 10453.31 122.50 12181.77 9048.98 23842.97 00:10:36.313 PCIE (0000:00:08.0) NSID 3 from core 0: 10453.31 122.50 12167.84 9267.32 22454.47 00:10:36.313 ======================================================== 00:10:36.313 Total : 62337.41 730.52 12277.80 8986.32 41554.59 00:10:36.313 00:10:36.313 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:36.313 ================================================================================= 00:10:36.313 1.00000% : 9472.931us 00:10:36.313 10.00000% : 10247.447us 00:10:36.313 25.00000% : 11021.964us 00:10:36.313 50.00000% : 12034.793us 00:10:36.313 75.00000% : 13107.200us 00:10:36.313 90.00000% : 14179.607us 00:10:36.313 95.00000% : 14715.811us 00:10:36.313 98.00000% : 15728.640us 00:10:36.314 99.00000% : 37653.411us 00:10:36.314 99.50000% : 39798.225us 00:10:36.314 99.90000% : 41228.102us 00:10:36.314 99.99000% : 41704.727us 00:10:36.314 99.99900% : 41704.727us 00:10:36.314 99.99990% : 41704.727us 00:10:36.314 99.99999% : 41704.727us 00:10:36.314 00:10:36.314 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:36.314 ================================================================================= 00:10:36.314 1.00000% : 9711.244us 00:10:36.314 10.00000% : 10366.604us 00:10:36.314 25.00000% : 11081.542us 00:10:36.314 50.00000% : 11975.215us 00:10:36.314 75.00000% : 12988.044us 00:10:36.314 90.00000% : 14060.451us 00:10:36.314 95.00000% : 14537.076us 00:10:36.314 98.00000% : 15847.796us 00:10:36.314 99.00000% : 37176.785us 00:10:36.314 99.50000% : 39083.287us 00:10:36.314 99.90000% : 40513.164us 00:10:36.314 99.99000% : 40751.476us 00:10:36.314 99.99900% : 40751.476us 00:10:36.314 99.99990% : 40751.476us 00:10:36.314 99.99999% : 40751.476us 00:10:36.314 00:10:36.314 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:36.314 ================================================================================= 00:10:36.314 1.00000% : 9711.244us 00:10:36.314 10.00000% : 10366.604us 00:10:36.314 25.00000% : 11081.542us 00:10:36.314 50.00000% : 11915.636us 00:10:36.314 75.00000% : 12988.044us 00:10:36.314 90.00000% : 14000.873us 00:10:36.314 95.00000% : 14596.655us 00:10:36.314 98.00000% : 15966.953us 00:10:36.314 99.00000% : 37415.098us 00:10:36.314 99.50000% : 39559.913us 
00:10:36.314 99.90000% : 41228.102us 00:10:36.314 99.99000% : 41466.415us 00:10:36.314 99.99900% : 41466.415us 00:10:36.314 99.99990% : 41466.415us 00:10:36.314 99.99999% : 41466.415us 00:10:36.314 00:10:36.314 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:36.314 ================================================================================= 00:10:36.314 1.00000% : 9651.665us 00:10:36.314 10.00000% : 10426.182us 00:10:36.314 25.00000% : 11081.542us 00:10:36.314 50.00000% : 11975.215us 00:10:36.314 75.00000% : 13047.622us 00:10:36.314 90.00000% : 14060.451us 00:10:36.314 95.00000% : 14596.655us 00:10:36.314 98.00000% : 15371.171us 00:10:36.314 99.00000% : 22282.240us 00:10:36.314 99.50000% : 24069.585us 00:10:36.314 99.90000% : 25499.462us 00:10:36.314 99.99000% : 25737.775us 00:10:36.314 99.99900% : 25856.931us 00:10:36.314 99.99990% : 25856.931us 00:10:36.314 99.99999% : 25856.931us 00:10:36.314 00:10:36.314 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:36.314 ================================================================================= 00:10:36.314 1.00000% : 9592.087us 00:10:36.314 10.00000% : 10366.604us 00:10:36.314 25.00000% : 11141.120us 00:10:36.314 50.00000% : 11975.215us 00:10:36.314 75.00000% : 13047.622us 00:10:36.314 90.00000% : 14120.029us 00:10:36.314 95.00000% : 14656.233us 00:10:36.314 98.00000% : 15609.484us 00:10:36.314 99.00000% : 20256.582us 00:10:36.314 99.50000% : 22043.927us 00:10:36.314 99.90000% : 23592.960us 00:10:36.314 99.99000% : 23831.273us 00:10:36.314 99.99900% : 23950.429us 00:10:36.314 99.99990% : 23950.429us 00:10:36.314 99.99999% : 23950.429us 00:10:36.314 00:10:36.314 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:36.314 ================================================================================= 00:10:36.314 1.00000% : 9651.665us 00:10:36.314 10.00000% : 10366.604us 00:10:36.314 25.00000% : 11081.542us 00:10:36.314 50.00000% : 12034.793us 00:10:36.314 75.00000% : 13107.200us 00:10:36.314 90.00000% : 14060.451us 00:10:36.314 95.00000% : 14537.076us 00:10:36.314 98.00000% : 15609.484us 00:10:36.314 99.00000% : 18945.862us 00:10:36.314 99.50000% : 20733.207us 00:10:36.314 99.90000% : 22163.084us 00:10:36.314 99.99000% : 22520.553us 00:10:36.314 99.99900% : 22520.553us 00:10:36.314 99.99990% : 22520.553us 00:10:36.314 99.99999% : 22520.553us 00:10:36.314 00:10:36.314 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:36.314 ============================================================================== 00:10:36.314 Range in us Cumulative IO count 00:10:36.314 9055.884 - 9115.462: 0.0579% ( 6) 00:10:36.314 9115.462 - 9175.040: 0.1833% ( 13) 00:10:36.314 9175.040 - 9234.618: 0.2508% ( 7) 00:10:36.314 9234.618 - 9294.196: 0.4340% ( 19) 00:10:36.314 9294.196 - 9353.775: 0.6655% ( 24) 00:10:36.314 9353.775 - 9413.353: 0.9742% ( 32) 00:10:36.314 9413.353 - 9472.931: 1.3021% ( 34) 00:10:36.314 9472.931 - 9532.509: 1.7747% ( 49) 00:10:36.314 9532.509 - 9592.087: 2.3438% ( 59) 00:10:36.314 9592.087 - 9651.665: 2.8742% ( 55) 00:10:36.314 9651.665 - 9711.244: 3.5783% ( 73) 00:10:36.314 9711.244 - 9770.822: 4.2728% ( 72) 00:10:36.314 9770.822 - 9830.400: 4.8515% ( 60) 00:10:36.314 9830.400 - 9889.978: 5.3916% ( 56) 00:10:36.314 9889.978 - 9949.556: 6.0475% ( 68) 00:10:36.314 9949.556 - 10009.135: 6.8287% ( 81) 00:10:36.314 10009.135 - 10068.713: 7.6871% ( 89) 00:10:36.314 10068.713 - 10128.291: 8.4780% ( 82) 00:10:36.314 10128.291 - 10187.869: 9.3750% ( 93) 
00:10:36.314 10187.869 - 10247.447: 10.3009% ( 96) 00:10:36.314 10247.447 - 10307.025: 11.3619% ( 110) 00:10:36.314 10307.025 - 10366.604: 12.4228% ( 110) 00:10:36.314 10366.604 - 10426.182: 13.3584% ( 97) 00:10:36.314 10426.182 - 10485.760: 14.4965% ( 118) 00:10:36.314 10485.760 - 10545.338: 15.5575% ( 110) 00:10:36.314 10545.338 - 10604.916: 16.6281% ( 111) 00:10:36.314 10604.916 - 10664.495: 17.6601% ( 107) 00:10:36.314 10664.495 - 10724.073: 18.8754% ( 126) 00:10:36.314 10724.073 - 10783.651: 20.2064% ( 138) 00:10:36.314 10783.651 - 10843.229: 21.3349% ( 117) 00:10:36.314 10843.229 - 10902.807: 22.7816% ( 150) 00:10:36.314 10902.807 - 10962.385: 24.3248% ( 160) 00:10:36.314 10962.385 - 11021.964: 25.6462% ( 137) 00:10:36.314 11021.964 - 11081.542: 26.9579% ( 136) 00:10:36.314 11081.542 - 11141.120: 28.4433% ( 154) 00:10:36.314 11141.120 - 11200.698: 29.7936% ( 140) 00:10:36.314 11200.698 - 11260.276: 31.1728% ( 143) 00:10:36.314 11260.276 - 11319.855: 32.6003% ( 148) 00:10:36.314 11319.855 - 11379.433: 33.9506% ( 140) 00:10:36.314 11379.433 - 11439.011: 35.4456% ( 155) 00:10:36.314 11439.011 - 11498.589: 36.8345% ( 144) 00:10:36.314 11498.589 - 11558.167: 38.3681% ( 159) 00:10:36.314 11558.167 - 11617.745: 39.8630% ( 155) 00:10:36.314 11617.745 - 11677.324: 41.4352% ( 163) 00:10:36.314 11677.324 - 11736.902: 42.9302% ( 155) 00:10:36.314 11736.902 - 11796.480: 44.6084% ( 174) 00:10:36.314 11796.480 - 11856.058: 46.1227% ( 157) 00:10:36.314 11856.058 - 11915.636: 47.6466% ( 158) 00:10:36.314 11915.636 - 11975.215: 49.2284% ( 164) 00:10:36.314 11975.215 - 12034.793: 50.8391% ( 167) 00:10:36.314 12034.793 - 12094.371: 52.4981% ( 172) 00:10:36.314 12094.371 - 12153.949: 53.9641% ( 152) 00:10:36.314 12153.949 - 12213.527: 55.6327% ( 173) 00:10:36.314 12213.527 - 12273.105: 57.1759% ( 160) 00:10:36.314 12273.105 - 12332.684: 58.7867% ( 167) 00:10:36.314 12332.684 - 12392.262: 60.3395% ( 161) 00:10:36.314 12392.262 - 12451.840: 61.6898% ( 140) 00:10:36.314 12451.840 - 12511.418: 63.1269% ( 149) 00:10:36.314 12511.418 - 12570.996: 64.6412% ( 157) 00:10:36.314 12570.996 - 12630.575: 66.0108% ( 142) 00:10:36.314 12630.575 - 12690.153: 67.3900% ( 143) 00:10:36.314 12690.153 - 12749.731: 68.6825% ( 134) 00:10:36.314 12749.731 - 12809.309: 69.8688% ( 123) 00:10:36.314 12809.309 - 12868.887: 71.0648% ( 124) 00:10:36.314 12868.887 - 12928.465: 72.3765% ( 136) 00:10:36.314 12928.465 - 12988.044: 73.4086% ( 107) 00:10:36.314 12988.044 - 13047.622: 74.4985% ( 113) 00:10:36.314 13047.622 - 13107.200: 75.5208% ( 106) 00:10:36.314 13107.200 - 13166.778: 76.6879% ( 121) 00:10:36.314 13166.778 - 13226.356: 77.6427% ( 99) 00:10:36.314 13226.356 - 13285.935: 78.7037% ( 110) 00:10:36.314 13285.935 - 13345.513: 79.8032% ( 114) 00:10:36.314 13345.513 - 13405.091: 80.8256% ( 106) 00:10:36.314 13405.091 - 13464.669: 81.7419% ( 95) 00:10:36.314 13464.669 - 13524.247: 82.7064% ( 100) 00:10:36.314 13524.247 - 13583.825: 83.5552% ( 88) 00:10:36.314 13583.825 - 13643.404: 84.3075% ( 78) 00:10:36.314 13643.404 - 13702.982: 85.0309% ( 75) 00:10:36.314 13702.982 - 13762.560: 85.7832% ( 78) 00:10:36.314 13762.560 - 13822.138: 86.4487% ( 69) 00:10:36.314 13822.138 - 13881.716: 87.1238% ( 70) 00:10:36.314 13881.716 - 13941.295: 87.7604% ( 66) 00:10:36.314 13941.295 - 14000.873: 88.3777% ( 64) 00:10:36.314 14000.873 - 14060.451: 89.0336% ( 68) 00:10:36.314 14060.451 - 14120.029: 89.6701% ( 66) 00:10:36.314 14120.029 - 14179.607: 90.3453% ( 70) 00:10:36.314 14179.607 - 14239.185: 91.0012% ( 68) 00:10:36.314 14239.185 - 
14298.764: 91.6474% ( 67) 00:10:36.314 14298.764 - 14358.342: 92.2743% ( 65) 00:10:36.314 14358.342 - 14417.920: 92.8241% ( 57) 00:10:36.314 14417.920 - 14477.498: 93.3835% ( 58) 00:10:36.314 14477.498 - 14537.076: 93.8657% ( 50) 00:10:36.314 14537.076 - 14596.655: 94.3673% ( 52) 00:10:36.314 14596.655 - 14656.233: 94.8013% ( 45) 00:10:36.314 14656.233 - 14715.811: 95.2160% ( 43) 00:10:36.314 14715.811 - 14775.389: 95.5922% ( 39) 00:10:36.314 14775.389 - 14834.967: 95.9491% ( 37) 00:10:36.314 14834.967 - 14894.545: 96.2481% ( 31) 00:10:36.314 14894.545 - 14954.124: 96.4892% ( 25) 00:10:36.314 14954.124 - 15013.702: 96.7400% ( 26) 00:10:36.314 15013.702 - 15073.280: 96.9039% ( 17) 00:10:36.314 15073.280 - 15132.858: 97.0872% ( 19) 00:10:36.314 15132.858 - 15192.436: 97.2029% ( 12) 00:10:36.314 15192.436 - 15252.015: 97.3380% ( 14) 00:10:36.314 15252.015 - 15371.171: 97.5502% ( 22) 00:10:36.314 15371.171 - 15490.327: 97.7623% ( 22) 00:10:36.315 15490.327 - 15609.484: 97.9842% ( 23) 00:10:36.315 15609.484 - 15728.640: 98.1481% ( 17) 00:10:36.315 15728.640 - 15847.796: 98.2639% ( 12) 00:10:36.315 15847.796 - 15966.953: 98.3700% ( 11) 00:10:36.315 15966.953 - 16086.109: 98.4568% ( 9) 00:10:36.315 16086.109 - 16205.265: 98.5725% ( 12) 00:10:36.315 16205.265 - 16324.422: 98.6690% ( 10) 00:10:36.315 16324.422 - 16443.578: 98.7269% ( 6) 00:10:36.315 16443.578 - 16562.735: 98.7654% ( 4) 00:10:36.315 36461.847 - 36700.160: 98.7751% ( 1) 00:10:36.315 36700.160 - 36938.473: 98.8329% ( 6) 00:10:36.315 36938.473 - 37176.785: 98.9005% ( 7) 00:10:36.315 37176.785 - 37415.098: 98.9583% ( 6) 00:10:36.315 37415.098 - 37653.411: 99.0162% ( 6) 00:10:36.315 37653.411 - 37891.724: 99.0741% ( 6) 00:10:36.315 37891.724 - 38130.036: 99.1319% ( 6) 00:10:36.315 38130.036 - 38368.349: 99.1898% ( 6) 00:10:36.315 38368.349 - 38606.662: 99.2477% ( 6) 00:10:36.315 38606.662 - 38844.975: 99.3056% ( 6) 00:10:36.315 38844.975 - 39083.287: 99.3634% ( 6) 00:10:36.315 39083.287 - 39321.600: 99.4309% ( 7) 00:10:36.315 39321.600 - 39559.913: 99.4888% ( 6) 00:10:36.315 39559.913 - 39798.225: 99.5370% ( 5) 00:10:36.315 39798.225 - 40036.538: 99.6046% ( 7) 00:10:36.315 40036.538 - 40274.851: 99.6624% ( 6) 00:10:36.315 40274.851 - 40513.164: 99.7299% ( 7) 00:10:36.315 40513.164 - 40751.476: 99.7878% ( 6) 00:10:36.315 40751.476 - 40989.789: 99.8457% ( 6) 00:10:36.315 40989.789 - 41228.102: 99.9228% ( 8) 00:10:36.315 41228.102 - 41466.415: 99.9711% ( 5) 00:10:36.315 41466.415 - 41704.727: 100.0000% ( 3) 00:10:36.315 00:10:36.315 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:36.315 ============================================================================== 00:10:36.315 Range in us Cumulative IO count 00:10:36.315 9353.775 - 9413.353: 0.0965% ( 10) 00:10:36.315 9413.353 - 9472.931: 0.1929% ( 10) 00:10:36.315 9472.931 - 9532.509: 0.3762% ( 19) 00:10:36.315 9532.509 - 9592.087: 0.5787% ( 21) 00:10:36.315 9592.087 - 9651.665: 0.8777% ( 31) 00:10:36.315 9651.665 - 9711.244: 1.2828% ( 42) 00:10:36.315 9711.244 - 9770.822: 1.6879% ( 42) 00:10:36.315 9770.822 - 9830.400: 2.2087% ( 54) 00:10:36.315 9830.400 - 9889.978: 3.0189% ( 84) 00:10:36.315 9889.978 - 9949.556: 3.9738% ( 99) 00:10:36.315 9949.556 - 10009.135: 4.8418% ( 90) 00:10:36.315 10009.135 - 10068.713: 5.6617% ( 85) 00:10:36.315 10068.713 - 10128.291: 6.5683% ( 94) 00:10:36.315 10128.291 - 10187.869: 7.3495% ( 81) 00:10:36.315 10187.869 - 10247.447: 8.2658% ( 95) 00:10:36.315 10247.447 - 10307.025: 9.2207% ( 99) 00:10:36.315 10307.025 - 10366.604: 10.1562% ( 
97) 00:10:36.315 10366.604 - 10426.182: 11.0436% ( 92) 00:10:36.315 10426.182 - 10485.760: 12.0370% ( 103) 00:10:36.315 10485.760 - 10545.338: 13.1944% ( 120) 00:10:36.315 10545.338 - 10604.916: 14.3229% ( 117) 00:10:36.315 10604.916 - 10664.495: 15.4996% ( 122) 00:10:36.315 10664.495 - 10724.073: 16.7438% ( 129) 00:10:36.315 10724.073 - 10783.651: 17.9880% ( 129) 00:10:36.315 10783.651 - 10843.229: 19.3094% ( 137) 00:10:36.315 10843.229 - 10902.807: 20.6019% ( 134) 00:10:36.315 10902.807 - 10962.385: 21.8943% ( 134) 00:10:36.315 10962.385 - 11021.964: 23.3314% ( 149) 00:10:36.315 11021.964 - 11081.542: 25.0482% ( 178) 00:10:36.315 11081.542 - 11141.120: 26.6879% ( 170) 00:10:36.315 11141.120 - 11200.698: 28.3661% ( 174) 00:10:36.315 11200.698 - 11260.276: 30.0251% ( 172) 00:10:36.315 11260.276 - 11319.855: 31.7323% ( 177) 00:10:36.315 11319.855 - 11379.433: 33.4201% ( 175) 00:10:36.315 11379.433 - 11439.011: 35.0791% ( 172) 00:10:36.315 11439.011 - 11498.589: 36.8345% ( 182) 00:10:36.315 11498.589 - 11558.167: 38.5417% ( 177) 00:10:36.315 11558.167 - 11617.745: 40.3260% ( 185) 00:10:36.315 11617.745 - 11677.324: 42.0428% ( 178) 00:10:36.315 11677.324 - 11736.902: 43.7211% ( 174) 00:10:36.315 11736.902 - 11796.480: 45.4668% ( 181) 00:10:36.315 11796.480 - 11856.058: 47.1065% ( 170) 00:10:36.315 11856.058 - 11915.636: 48.9873% ( 195) 00:10:36.315 11915.636 - 11975.215: 50.6269% ( 170) 00:10:36.315 11975.215 - 12034.793: 52.2473% ( 168) 00:10:36.315 12034.793 - 12094.371: 53.8194% ( 163) 00:10:36.315 12094.371 - 12153.949: 55.4302% ( 167) 00:10:36.315 12153.949 - 12213.527: 56.9155% ( 154) 00:10:36.315 12213.527 - 12273.105: 58.3816% ( 152) 00:10:36.315 12273.105 - 12332.684: 59.8958% ( 157) 00:10:36.315 12332.684 - 12392.262: 61.4005% ( 156) 00:10:36.315 12392.262 - 12451.840: 63.1559% ( 182) 00:10:36.315 12451.840 - 12511.418: 64.6991% ( 160) 00:10:36.315 12511.418 - 12570.996: 66.1844% ( 154) 00:10:36.315 12570.996 - 12630.575: 67.6022% ( 147) 00:10:36.315 12630.575 - 12690.153: 68.9622% ( 141) 00:10:36.315 12690.153 - 12749.731: 70.2932% ( 138) 00:10:36.315 12749.731 - 12809.309: 71.6049% ( 136) 00:10:36.315 12809.309 - 12868.887: 73.0035% ( 145) 00:10:36.315 12868.887 - 12928.465: 74.1995% ( 124) 00:10:36.315 12928.465 - 12988.044: 75.4147% ( 126) 00:10:36.315 12988.044 - 13047.622: 76.5818% ( 121) 00:10:36.315 13047.622 - 13107.200: 77.6813% ( 114) 00:10:36.315 13107.200 - 13166.778: 78.7809% ( 114) 00:10:36.315 13166.778 - 13226.356: 79.7550% ( 101) 00:10:36.315 13226.356 - 13285.935: 80.7292% ( 101) 00:10:36.315 13285.935 - 13345.513: 81.6551% ( 96) 00:10:36.315 13345.513 - 13405.091: 82.4749% ( 85) 00:10:36.315 13405.091 - 13464.669: 83.3430% ( 90) 00:10:36.315 13464.669 - 13524.247: 84.0953% ( 78) 00:10:36.315 13524.247 - 13583.825: 84.8476% ( 78) 00:10:36.315 13583.825 - 13643.404: 85.5806% ( 76) 00:10:36.315 13643.404 - 13702.982: 86.3137% ( 76) 00:10:36.315 13702.982 - 13762.560: 87.1142% ( 83) 00:10:36.315 13762.560 - 13822.138: 87.8665% ( 78) 00:10:36.315 13822.138 - 13881.716: 88.5802% ( 74) 00:10:36.315 13881.716 - 13941.295: 89.2843% ( 73) 00:10:36.315 13941.295 - 14000.873: 89.9498% ( 69) 00:10:36.315 14000.873 - 14060.451: 90.6636% ( 74) 00:10:36.315 14060.451 - 14120.029: 91.3484% ( 71) 00:10:36.315 14120.029 - 14179.607: 92.0235% ( 70) 00:10:36.315 14179.607 - 14239.185: 92.6312% ( 63) 00:10:36.315 14239.185 - 14298.764: 93.2292% ( 62) 00:10:36.315 14298.764 - 14358.342: 93.7693% ( 56) 00:10:36.315 14358.342 - 14417.920: 94.3576% ( 61) 00:10:36.315 14417.920 - 
14477.498: 94.8592% ( 52) 00:10:36.315 14477.498 - 14537.076: 95.3125% ( 47) 00:10:36.315 14537.076 - 14596.655: 95.6501% ( 35) 00:10:36.315 14596.655 - 14656.233: 95.8816% ( 24) 00:10:36.315 14656.233 - 14715.811: 96.1323% ( 26) 00:10:36.315 14715.811 - 14775.389: 96.3445% ( 22) 00:10:36.315 14775.389 - 14834.967: 96.5471% ( 21) 00:10:36.315 14834.967 - 14894.545: 96.7110% ( 17) 00:10:36.315 14894.545 - 14954.124: 96.8557% ( 15) 00:10:36.315 14954.124 - 15013.702: 97.0100% ( 16) 00:10:36.315 15013.702 - 15073.280: 97.1258% ( 12) 00:10:36.315 15073.280 - 15132.858: 97.2415% ( 12) 00:10:36.315 15132.858 - 15192.436: 97.3283% ( 9) 00:10:36.315 15192.436 - 15252.015: 97.4151% ( 9) 00:10:36.315 15252.015 - 15371.171: 97.5887% ( 18) 00:10:36.315 15371.171 - 15490.327: 97.7334% ( 15) 00:10:36.315 15490.327 - 15609.484: 97.8588% ( 13) 00:10:36.315 15609.484 - 15728.640: 97.9745% ( 12) 00:10:36.315 15728.640 - 15847.796: 98.0806% ( 11) 00:10:36.315 15847.796 - 15966.953: 98.2060% ( 13) 00:10:36.315 15966.953 - 16086.109: 98.3218% ( 12) 00:10:36.315 16086.109 - 16205.265: 98.4375% ( 12) 00:10:36.315 16205.265 - 16324.422: 98.5050% ( 7) 00:10:36.315 16324.422 - 16443.578: 98.5629% ( 6) 00:10:36.315 16443.578 - 16562.735: 98.6304% ( 7) 00:10:36.315 16562.735 - 16681.891: 98.6883% ( 6) 00:10:36.315 16681.891 - 16801.047: 98.7558% ( 7) 00:10:36.315 16801.047 - 16920.204: 98.7654% ( 1) 00:10:36.315 35985.222 - 36223.535: 98.7847% ( 2) 00:10:36.315 36223.535 - 36461.847: 98.8522% ( 7) 00:10:36.315 36461.847 - 36700.160: 98.9198% ( 7) 00:10:36.315 36700.160 - 36938.473: 98.9776% ( 6) 00:10:36.315 36938.473 - 37176.785: 99.0451% ( 7) 00:10:36.315 37176.785 - 37415.098: 99.1030% ( 6) 00:10:36.315 37415.098 - 37653.411: 99.1705% ( 7) 00:10:36.315 37653.411 - 37891.724: 99.2284% ( 6) 00:10:36.315 37891.724 - 38130.036: 99.2959% ( 7) 00:10:36.315 38130.036 - 38368.349: 99.3634% ( 7) 00:10:36.315 38368.349 - 38606.662: 99.4213% ( 6) 00:10:36.315 38606.662 - 38844.975: 99.4792% ( 6) 00:10:36.315 38844.975 - 39083.287: 99.5467% ( 7) 00:10:36.315 39083.287 - 39321.600: 99.6046% ( 6) 00:10:36.315 39321.600 - 39559.913: 99.6721% ( 7) 00:10:36.315 39559.913 - 39798.225: 99.7299% ( 6) 00:10:36.315 39798.225 - 40036.538: 99.7975% ( 7) 00:10:36.315 40036.538 - 40274.851: 99.8650% ( 7) 00:10:36.315 40274.851 - 40513.164: 99.9325% ( 7) 00:10:36.315 40513.164 - 40751.476: 100.0000% ( 7) 00:10:36.315 00:10:36.315 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:36.315 ============================================================================== 00:10:36.315 Range in us Cumulative IO count 00:10:36.315 9175.040 - 9234.618: 0.0675% ( 7) 00:10:36.315 9234.618 - 9294.196: 0.1254% ( 6) 00:10:36.315 9294.196 - 9353.775: 0.1929% ( 7) 00:10:36.315 9353.775 - 9413.353: 0.2218% ( 3) 00:10:36.315 9413.353 - 9472.931: 0.3665% ( 15) 00:10:36.315 9472.931 - 9532.509: 0.5208% ( 16) 00:10:36.315 9532.509 - 9592.087: 0.6944% ( 18) 00:10:36.315 9592.087 - 9651.665: 0.9356% ( 25) 00:10:36.315 9651.665 - 9711.244: 1.2056% ( 28) 00:10:36.315 9711.244 - 9770.822: 1.6686% ( 48) 00:10:36.315 9770.822 - 9830.400: 2.1798% ( 53) 00:10:36.316 9830.400 - 9889.978: 2.8935% ( 74) 00:10:36.316 9889.978 - 9949.556: 3.7712% ( 91) 00:10:36.316 9949.556 - 10009.135: 4.7164% ( 98) 00:10:36.316 10009.135 - 10068.713: 5.8063% ( 113) 00:10:36.316 10068.713 - 10128.291: 6.7515% ( 98) 00:10:36.316 10128.291 - 10187.869: 7.5714% ( 85) 00:10:36.316 10187.869 - 10247.447: 8.4201% ( 88) 00:10:36.316 10247.447 - 10307.025: 9.4618% ( 108) 00:10:36.316 
10307.025 - 10366.604: 10.4167% ( 99) 00:10:36.316 10366.604 - 10426.182: 11.3812% ( 100) 00:10:36.316 10426.182 - 10485.760: 12.3071% ( 96) 00:10:36.316 10485.760 - 10545.338: 13.2620% ( 99) 00:10:36.316 10545.338 - 10604.916: 14.3904% ( 117) 00:10:36.316 10604.916 - 10664.495: 15.5575% ( 121) 00:10:36.316 10664.495 - 10724.073: 16.8113% ( 130) 00:10:36.316 10724.073 - 10783.651: 18.1424% ( 138) 00:10:36.316 10783.651 - 10843.229: 19.5698% ( 148) 00:10:36.316 10843.229 - 10902.807: 21.0455% ( 153) 00:10:36.316 10902.807 - 10962.385: 22.5309% ( 154) 00:10:36.316 10962.385 - 11021.964: 24.0837% ( 161) 00:10:36.316 11021.964 - 11081.542: 25.6655% ( 164) 00:10:36.316 11081.542 - 11141.120: 27.1991% ( 159) 00:10:36.316 11141.120 - 11200.698: 28.9062% ( 177) 00:10:36.316 11200.698 - 11260.276: 30.6617% ( 182) 00:10:36.316 11260.276 - 11319.855: 32.3881% ( 179) 00:10:36.316 11319.855 - 11379.433: 34.1628% ( 184) 00:10:36.316 11379.433 - 11439.011: 35.8700% ( 177) 00:10:36.316 11439.011 - 11498.589: 37.5868% ( 178) 00:10:36.316 11498.589 - 11558.167: 39.2747% ( 175) 00:10:36.316 11558.167 - 11617.745: 41.0204% ( 181) 00:10:36.316 11617.745 - 11677.324: 42.7083% ( 175) 00:10:36.316 11677.324 - 11736.902: 44.6470% ( 201) 00:10:36.316 11736.902 - 11796.480: 46.6532% ( 208) 00:10:36.316 11796.480 - 11856.058: 48.5822% ( 200) 00:10:36.316 11856.058 - 11915.636: 50.4244% ( 191) 00:10:36.316 11915.636 - 11975.215: 52.1798% ( 182) 00:10:36.316 11975.215 - 12034.793: 53.7809% ( 166) 00:10:36.316 12034.793 - 12094.371: 55.4205% ( 170) 00:10:36.316 12094.371 - 12153.949: 57.0698% ( 171) 00:10:36.316 12153.949 - 12213.527: 58.7384% ( 173) 00:10:36.316 12213.527 - 12273.105: 60.2720% ( 159) 00:10:36.316 12273.105 - 12332.684: 61.6416% ( 142) 00:10:36.316 12332.684 - 12392.262: 63.0980% ( 151) 00:10:36.316 12392.262 - 12451.840: 64.4483% ( 140) 00:10:36.316 12451.840 - 12511.418: 65.7890% ( 139) 00:10:36.316 12511.418 - 12570.996: 67.1007% ( 136) 00:10:36.316 12570.996 - 12630.575: 68.3835% ( 133) 00:10:36.316 12630.575 - 12690.153: 69.6084% ( 127) 00:10:36.316 12690.153 - 12749.731: 70.8430% ( 128) 00:10:36.316 12749.731 - 12809.309: 72.0100% ( 121) 00:10:36.316 12809.309 - 12868.887: 73.1771% ( 121) 00:10:36.316 12868.887 - 12928.465: 74.3538% ( 122) 00:10:36.316 12928.465 - 12988.044: 75.6269% ( 132) 00:10:36.316 12988.044 - 13047.622: 76.7940% ( 121) 00:10:36.316 13047.622 - 13107.200: 77.9610% ( 121) 00:10:36.316 13107.200 - 13166.778: 79.1281% ( 121) 00:10:36.316 13166.778 - 13226.356: 80.1312% ( 104) 00:10:36.316 13226.356 - 13285.935: 81.0764% ( 98) 00:10:36.316 13285.935 - 13345.513: 81.9927% ( 95) 00:10:36.316 13345.513 - 13405.091: 82.8897% ( 93) 00:10:36.316 13405.091 - 13464.669: 83.7481% ( 89) 00:10:36.316 13464.669 - 13524.247: 84.5872% ( 87) 00:10:36.316 13524.247 - 13583.825: 85.4070% ( 85) 00:10:36.316 13583.825 - 13643.404: 86.1400% ( 76) 00:10:36.316 13643.404 - 13702.982: 86.8924% ( 78) 00:10:36.316 13702.982 - 13762.560: 87.6929% ( 83) 00:10:36.316 13762.560 - 13822.138: 88.3970% ( 73) 00:10:36.316 13822.138 - 13881.716: 89.0721% ( 70) 00:10:36.316 13881.716 - 13941.295: 89.8052% ( 76) 00:10:36.316 13941.295 - 14000.873: 90.4417% ( 66) 00:10:36.316 14000.873 - 14060.451: 91.1073% ( 69) 00:10:36.316 14060.451 - 14120.029: 91.6956% ( 61) 00:10:36.316 14120.029 - 14179.607: 92.2454% ( 57) 00:10:36.316 14179.607 - 14239.185: 92.7855% ( 56) 00:10:36.316 14239.185 - 14298.764: 93.2485% ( 48) 00:10:36.316 14298.764 - 14358.342: 93.6343% ( 40) 00:10:36.316 14358.342 - 14417.920: 94.0297% ( 41) 
00:10:36.316 14417.920 - 14477.498: 94.4444% ( 43) 00:10:36.316 14477.498 - 14537.076: 94.8110% ( 38) 00:10:36.316 14537.076 - 14596.655: 95.1100% ( 31) 00:10:36.316 14596.655 - 14656.233: 95.3511% ( 25) 00:10:36.316 14656.233 - 14715.811: 95.5633% ( 22) 00:10:36.316 14715.811 - 14775.389: 95.7851% ( 23) 00:10:36.316 14775.389 - 14834.967: 95.9491% ( 17) 00:10:36.316 14834.967 - 14894.545: 96.1902% ( 25) 00:10:36.316 14894.545 - 14954.124: 96.3927% ( 21) 00:10:36.316 14954.124 - 15013.702: 96.6146% ( 23) 00:10:36.316 15013.702 - 15073.280: 96.7689% ( 16) 00:10:36.316 15073.280 - 15132.858: 96.9232% ( 16) 00:10:36.316 15132.858 - 15192.436: 97.0486% ( 13) 00:10:36.316 15192.436 - 15252.015: 97.1644% ( 12) 00:10:36.316 15252.015 - 15371.171: 97.3862% ( 23) 00:10:36.316 15371.171 - 15490.327: 97.5502% ( 17) 00:10:36.316 15490.327 - 15609.484: 97.6755% ( 13) 00:10:36.316 15609.484 - 15728.640: 97.8009% ( 13) 00:10:36.316 15728.640 - 15847.796: 97.8974% ( 10) 00:10:36.316 15847.796 - 15966.953: 98.0228% ( 13) 00:10:36.316 15966.953 - 16086.109: 98.1289% ( 11) 00:10:36.316 16086.109 - 16205.265: 98.2446% ( 12) 00:10:36.316 16205.265 - 16324.422: 98.3121% ( 7) 00:10:36.316 16324.422 - 16443.578: 98.3796% ( 7) 00:10:36.316 16443.578 - 16562.735: 98.4375% ( 6) 00:10:36.316 16562.735 - 16681.891: 98.4857% ( 5) 00:10:36.316 16681.891 - 16801.047: 98.5340% ( 5) 00:10:36.316 16801.047 - 16920.204: 98.6015% ( 7) 00:10:36.316 16920.204 - 17039.360: 98.6593% ( 6) 00:10:36.316 17039.360 - 17158.516: 98.7076% ( 5) 00:10:36.316 17158.516 - 17277.673: 98.7558% ( 5) 00:10:36.316 17277.673 - 17396.829: 98.7654% ( 1) 00:10:36.316 35985.222 - 36223.535: 98.7751% ( 1) 00:10:36.316 36223.535 - 36461.847: 98.8329% ( 6) 00:10:36.316 36461.847 - 36700.160: 98.8908% ( 6) 00:10:36.316 36700.160 - 36938.473: 98.9390% ( 5) 00:10:36.316 36938.473 - 37176.785: 98.9873% ( 5) 00:10:36.316 37176.785 - 37415.098: 99.0451% ( 6) 00:10:36.316 37415.098 - 37653.411: 99.1030% ( 6) 00:10:36.316 37653.411 - 37891.724: 99.1609% ( 6) 00:10:36.316 37891.724 - 38130.036: 99.2091% ( 5) 00:10:36.316 38130.036 - 38368.349: 99.2670% ( 6) 00:10:36.316 38368.349 - 38606.662: 99.3248% ( 6) 00:10:36.316 38606.662 - 38844.975: 99.3827% ( 6) 00:10:36.316 38844.975 - 39083.287: 99.4406% ( 6) 00:10:36.316 39083.287 - 39321.600: 99.4888% ( 5) 00:10:36.316 39321.600 - 39559.913: 99.5467% ( 6) 00:10:36.316 39559.913 - 39798.225: 99.5949% ( 5) 00:10:36.316 39798.225 - 40036.538: 99.6528% ( 6) 00:10:36.316 40036.538 - 40274.851: 99.7106% ( 6) 00:10:36.316 40274.851 - 40513.164: 99.7685% ( 6) 00:10:36.316 40513.164 - 40751.476: 99.8264% ( 6) 00:10:36.316 40751.476 - 40989.789: 99.8843% ( 6) 00:10:36.316 40989.789 - 41228.102: 99.9518% ( 7) 00:10:36.316 41228.102 - 41466.415: 100.0000% ( 5) 00:10:36.316 00:10:36.316 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:36.316 ============================================================================== 00:10:36.316 Range in us Cumulative IO count 00:10:36.316 8936.727 - 8996.305: 0.0095% ( 1) 00:10:36.316 9055.884 - 9115.462: 0.0286% ( 2) 00:10:36.316 9115.462 - 9175.040: 0.0857% ( 6) 00:10:36.316 9175.040 - 9234.618: 0.1810% ( 10) 00:10:36.316 9234.618 - 9294.196: 0.3335% ( 16) 00:10:36.316 9294.196 - 9353.775: 0.4383% ( 11) 00:10:36.316 9353.775 - 9413.353: 0.5335% ( 10) 00:10:36.316 9413.353 - 9472.931: 0.6288% ( 10) 00:10:36.316 9472.931 - 9532.509: 0.7146% ( 9) 00:10:36.316 9532.509 - 9592.087: 0.8479% ( 14) 00:10:36.316 9592.087 - 9651.665: 1.0957% ( 26) 00:10:36.316 9651.665 - 
9711.244: 1.5434% ( 47) 00:10:36.316 9711.244 - 9770.822: 2.2008% ( 69) 00:10:36.316 9770.822 - 9830.400: 2.8201% ( 65) 00:10:36.316 9830.400 - 9889.978: 3.4013% ( 61) 00:10:36.316 9889.978 - 9949.556: 3.9634% ( 59) 00:10:36.316 9949.556 - 10009.135: 4.6589% ( 73) 00:10:36.316 10009.135 - 10068.713: 5.4688% ( 85) 00:10:36.316 10068.713 - 10128.291: 6.2691% ( 84) 00:10:36.316 10128.291 - 10187.869: 7.0503% ( 82) 00:10:36.316 10187.869 - 10247.447: 7.8887% ( 88) 00:10:36.316 10247.447 - 10307.025: 8.8510% ( 101) 00:10:36.316 10307.025 - 10366.604: 9.8323% ( 103) 00:10:36.316 10366.604 - 10426.182: 11.0232% ( 125) 00:10:36.316 10426.182 - 10485.760: 12.1189% ( 115) 00:10:36.316 10485.760 - 10545.338: 13.3956% ( 134) 00:10:36.316 10545.338 - 10604.916: 14.5865% ( 125) 00:10:36.316 10604.916 - 10664.495: 15.8537% ( 133) 00:10:36.316 10664.495 - 10724.073: 17.1113% ( 132) 00:10:36.316 10724.073 - 10783.651: 18.3594% ( 131) 00:10:36.316 10783.651 - 10843.229: 19.6646% ( 137) 00:10:36.316 10843.229 - 10902.807: 20.9985% ( 140) 00:10:36.316 10902.807 - 10962.385: 22.3514% ( 142) 00:10:36.316 10962.385 - 11021.964: 23.7043% ( 142) 00:10:36.316 11021.964 - 11081.542: 25.1143% ( 148) 00:10:36.316 11081.542 - 11141.120: 26.6101% ( 157) 00:10:36.316 11141.120 - 11200.698: 28.1250% ( 159) 00:10:36.316 11200.698 - 11260.276: 29.8399% ( 180) 00:10:36.316 11260.276 - 11319.855: 31.4882% ( 173) 00:10:36.316 11319.855 - 11379.433: 33.2793% ( 188) 00:10:36.316 11379.433 - 11439.011: 35.0991% ( 191) 00:10:36.316 11439.011 - 11498.589: 37.0046% ( 200) 00:10:36.316 11498.589 - 11558.167: 38.8434% ( 193) 00:10:36.316 11558.167 - 11617.745: 40.6822% ( 193) 00:10:36.316 11617.745 - 11677.324: 42.4638% ( 187) 00:10:36.316 11677.324 - 11736.902: 44.2931% ( 192) 00:10:36.316 11736.902 - 11796.480: 45.9794% ( 177) 00:10:36.316 11796.480 - 11856.058: 47.7325% ( 184) 00:10:36.316 11856.058 - 11915.636: 49.4474% ( 180) 00:10:36.316 11915.636 - 11975.215: 51.1909% ( 183) 00:10:36.317 11975.215 - 12034.793: 52.7439% ( 163) 00:10:36.317 12034.793 - 12094.371: 54.2016% ( 153) 00:10:36.317 12094.371 - 12153.949: 55.6021% ( 147) 00:10:36.317 12153.949 - 12213.527: 57.0789% ( 155) 00:10:36.317 12213.527 - 12273.105: 58.4032% ( 139) 00:10:36.317 12273.105 - 12332.684: 59.7656% ( 143) 00:10:36.317 12332.684 - 12392.262: 61.0709% ( 137) 00:10:36.317 12392.262 - 12451.840: 62.4333% ( 143) 00:10:36.317 12451.840 - 12511.418: 63.9386% ( 158) 00:10:36.317 12511.418 - 12570.996: 65.4059% ( 154) 00:10:36.317 12570.996 - 12630.575: 66.8826% ( 155) 00:10:36.317 12630.575 - 12690.153: 68.3594% ( 155) 00:10:36.317 12690.153 - 12749.731: 69.7694% ( 148) 00:10:36.317 12749.731 - 12809.309: 71.0461% ( 134) 00:10:36.317 12809.309 - 12868.887: 72.2942% ( 131) 00:10:36.317 12868.887 - 12928.465: 73.4851% ( 125) 00:10:36.317 12928.465 - 12988.044: 74.5522% ( 112) 00:10:36.317 12988.044 - 13047.622: 75.6479% ( 115) 00:10:36.317 13047.622 - 13107.200: 76.7340% ( 114) 00:10:36.317 13107.200 - 13166.778: 77.7915% ( 111) 00:10:36.317 13166.778 - 13226.356: 78.7633% ( 102) 00:10:36.317 13226.356 - 13285.935: 79.8304% ( 112) 00:10:36.317 13285.935 - 13345.513: 80.8308% ( 105) 00:10:36.317 13345.513 - 13405.091: 81.9074% ( 113) 00:10:36.317 13405.091 - 13464.669: 82.8316% ( 97) 00:10:36.317 13464.669 - 13524.247: 83.6700% ( 88) 00:10:36.317 13524.247 - 13583.825: 84.4798% ( 85) 00:10:36.317 13583.825 - 13643.404: 85.2801% ( 84) 00:10:36.317 13643.404 - 13702.982: 86.0042% ( 76) 00:10:36.317 13702.982 - 13762.560: 86.7092% ( 74) 00:10:36.317 13762.560 - 
13822.138: 87.3857% ( 71) 00:10:36.317 13822.138 - 13881.716: 88.0716% ( 72) 00:10:36.317 13881.716 - 13941.295: 88.7386% ( 70) 00:10:36.317 13941.295 - 14000.873: 89.4531% ( 75) 00:10:36.317 14000.873 - 14060.451: 90.1105% ( 69) 00:10:36.317 14060.451 - 14120.029: 90.7203% ( 64) 00:10:36.317 14120.029 - 14179.607: 91.3300% ( 64) 00:10:36.317 14179.607 - 14239.185: 91.9207% ( 62) 00:10:36.317 14239.185 - 14298.764: 92.5305% ( 64) 00:10:36.317 14298.764 - 14358.342: 93.0640% ( 56) 00:10:36.317 14358.342 - 14417.920: 93.5785% ( 54) 00:10:36.317 14417.920 - 14477.498: 94.0930% ( 54) 00:10:36.317 14477.498 - 14537.076: 94.5694% ( 50) 00:10:36.317 14537.076 - 14596.655: 95.0362% ( 49) 00:10:36.317 14596.655 - 14656.233: 95.4078% ( 39) 00:10:36.317 14656.233 - 14715.811: 95.7889% ( 40) 00:10:36.317 14715.811 - 14775.389: 96.1033% ( 33) 00:10:36.317 14775.389 - 14834.967: 96.3891% ( 30) 00:10:36.317 14834.967 - 14894.545: 96.6368% ( 26) 00:10:36.317 14894.545 - 14954.124: 96.8464% ( 22) 00:10:36.317 14954.124 - 15013.702: 97.0560% ( 22) 00:10:36.317 15013.702 - 15073.280: 97.2656% ( 22) 00:10:36.317 15073.280 - 15132.858: 97.4657% ( 21) 00:10:36.317 15132.858 - 15192.436: 97.6658% ( 21) 00:10:36.317 15192.436 - 15252.015: 97.8373% ( 18) 00:10:36.317 15252.015 - 15371.171: 98.1040% ( 28) 00:10:36.317 15371.171 - 15490.327: 98.2755% ( 18) 00:10:36.317 15490.327 - 15609.484: 98.3613% ( 9) 00:10:36.317 15609.484 - 15728.640: 98.4184% ( 6) 00:10:36.317 15728.640 - 15847.796: 98.4661% ( 5) 00:10:36.317 15847.796 - 15966.953: 98.5232% ( 6) 00:10:36.317 15966.953 - 16086.109: 98.5709% ( 5) 00:10:36.317 16086.109 - 16205.265: 98.6280% ( 6) 00:10:36.317 16205.265 - 16324.422: 98.6757% ( 5) 00:10:36.317 16324.422 - 16443.578: 98.7424% ( 7) 00:10:36.317 16443.578 - 16562.735: 98.7805% ( 4) 00:10:36.317 21448.145 - 21567.302: 98.8091% ( 3) 00:10:36.317 21567.302 - 21686.458: 98.8472% ( 4) 00:10:36.317 21686.458 - 21805.615: 98.8758% ( 3) 00:10:36.317 21805.615 - 21924.771: 98.9139% ( 4) 00:10:36.317 21924.771 - 22043.927: 98.9425% ( 3) 00:10:36.317 22043.927 - 22163.084: 98.9806% ( 4) 00:10:36.317 22163.084 - 22282.240: 99.0091% ( 3) 00:10:36.317 22282.240 - 22401.396: 99.0473% ( 4) 00:10:36.317 22401.396 - 22520.553: 99.0758% ( 3) 00:10:36.317 22520.553 - 22639.709: 99.1139% ( 4) 00:10:36.317 22639.709 - 22758.865: 99.1521% ( 4) 00:10:36.317 22758.865 - 22878.022: 99.1806% ( 3) 00:10:36.317 22878.022 - 22997.178: 99.2188% ( 4) 00:10:36.317 22997.178 - 23116.335: 99.2473% ( 3) 00:10:36.317 23116.335 - 23235.491: 99.2854% ( 4) 00:10:36.317 23235.491 - 23354.647: 99.3140% ( 3) 00:10:36.317 23354.647 - 23473.804: 99.3521% ( 4) 00:10:36.317 23473.804 - 23592.960: 99.3902% ( 4) 00:10:36.317 23592.960 - 23712.116: 99.4188% ( 3) 00:10:36.317 23712.116 - 23831.273: 99.4569% ( 4) 00:10:36.317 23831.273 - 23950.429: 99.4855% ( 3) 00:10:36.317 23950.429 - 24069.585: 99.5236% ( 4) 00:10:36.317 24069.585 - 24188.742: 99.5522% ( 3) 00:10:36.317 24188.742 - 24307.898: 99.5903% ( 4) 00:10:36.317 24307.898 - 24427.055: 99.6189% ( 3) 00:10:36.317 24427.055 - 24546.211: 99.6570% ( 4) 00:10:36.317 24546.211 - 24665.367: 99.6951% ( 4) 00:10:36.317 24665.367 - 24784.524: 99.7237% ( 3) 00:10:36.317 24784.524 - 24903.680: 99.7618% ( 4) 00:10:36.317 24903.680 - 25022.836: 99.7904% ( 3) 00:10:36.317 25022.836 - 25141.993: 99.8190% ( 3) 00:10:36.317 25141.993 - 25261.149: 99.8476% ( 3) 00:10:36.317 25261.149 - 25380.305: 99.8857% ( 4) 00:10:36.317 25380.305 - 25499.462: 99.9238% ( 4) 00:10:36.317 25499.462 - 25618.618: 99.9524% ( 3) 
00:10:36.317 25618.618 - 25737.775: 99.9905% ( 4) 00:10:36.317 25737.775 - 25856.931: 100.0000% ( 1) 00:10:36.317 00:10:36.317 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:36.317 ============================================================================== 00:10:36.317 Range in us Cumulative IO count 00:10:36.317 8996.305 - 9055.884: 0.0095% ( 1) 00:10:36.317 9055.884 - 9115.462: 0.0191% ( 1) 00:10:36.317 9115.462 - 9175.040: 0.1143% ( 10) 00:10:36.317 9175.040 - 9234.618: 0.1810% ( 7) 00:10:36.317 9234.618 - 9294.196: 0.3144% ( 14) 00:10:36.317 9294.196 - 9353.775: 0.3906% ( 8) 00:10:36.317 9353.775 - 9413.353: 0.5621% ( 18) 00:10:36.317 9413.353 - 9472.931: 0.7622% ( 21) 00:10:36.317 9472.931 - 9532.509: 0.9337% ( 18) 00:10:36.317 9532.509 - 9592.087: 1.0575% ( 13) 00:10:36.317 9592.087 - 9651.665: 1.2290% ( 18) 00:10:36.317 9651.665 - 9711.244: 1.5720% ( 36) 00:10:36.317 9711.244 - 9770.822: 2.0008% ( 45) 00:10:36.317 9770.822 - 9830.400: 2.6963% ( 73) 00:10:36.317 9830.400 - 9889.978: 3.5823% ( 93) 00:10:36.317 9889.978 - 9949.556: 4.3636% ( 82) 00:10:36.317 9949.556 - 10009.135: 5.0495% ( 72) 00:10:36.317 10009.135 - 10068.713: 5.6784% ( 66) 00:10:36.317 10068.713 - 10128.291: 6.3929% ( 75) 00:10:36.317 10128.291 - 10187.869: 7.2694% ( 92) 00:10:36.317 10187.869 - 10247.447: 8.1174% ( 89) 00:10:36.317 10247.447 - 10307.025: 9.0225% ( 95) 00:10:36.317 10307.025 - 10366.604: 10.1086% ( 114) 00:10:36.317 10366.604 - 10426.182: 11.0518% ( 99) 00:10:36.317 10426.182 - 10485.760: 12.1094% ( 111) 00:10:36.317 10485.760 - 10545.338: 13.3479% ( 130) 00:10:36.317 10545.338 - 10604.916: 14.5389% ( 125) 00:10:36.317 10604.916 - 10664.495: 15.6726% ( 119) 00:10:36.317 10664.495 - 10724.073: 16.9493% ( 134) 00:10:36.317 10724.073 - 10783.651: 18.1307% ( 124) 00:10:36.317 10783.651 - 10843.229: 19.3693% ( 130) 00:10:36.317 10843.229 - 10902.807: 20.5983% ( 129) 00:10:36.317 10902.807 - 10962.385: 22.0370% ( 151) 00:10:36.317 10962.385 - 11021.964: 23.4375% ( 147) 00:10:36.317 11021.964 - 11081.542: 24.8666% ( 150) 00:10:36.317 11081.542 - 11141.120: 26.4005% ( 161) 00:10:36.317 11141.120 - 11200.698: 27.9630% ( 164) 00:10:36.317 11200.698 - 11260.276: 29.6208% ( 174) 00:10:36.317 11260.276 - 11319.855: 31.2500% ( 171) 00:10:36.317 11319.855 - 11379.433: 33.0126% ( 185) 00:10:36.317 11379.433 - 11439.011: 34.8514% ( 193) 00:10:36.317 11439.011 - 11498.589: 36.6330% ( 187) 00:10:36.317 11498.589 - 11558.167: 38.4909% ( 195) 00:10:36.317 11558.167 - 11617.745: 40.3868% ( 199) 00:10:36.317 11617.745 - 11677.324: 42.0827% ( 178) 00:10:36.317 11677.324 - 11736.902: 43.8643% ( 187) 00:10:36.317 11736.902 - 11796.480: 45.6460% ( 187) 00:10:36.317 11796.480 - 11856.058: 47.3704% ( 181) 00:10:36.317 11856.058 - 11915.636: 49.2664% ( 199) 00:10:36.317 11915.636 - 11975.215: 51.0480% ( 187) 00:10:36.317 11975.215 - 12034.793: 52.7439% ( 178) 00:10:36.317 12034.793 - 12094.371: 54.2397% ( 157) 00:10:36.317 12094.371 - 12153.949: 55.7069% ( 154) 00:10:36.317 12153.949 - 12213.527: 57.1456% ( 151) 00:10:36.317 12213.527 - 12273.105: 58.5842% ( 151) 00:10:36.317 12273.105 - 12332.684: 59.9752% ( 146) 00:10:36.317 12332.684 - 12392.262: 61.3472% ( 144) 00:10:36.317 12392.262 - 12451.840: 62.6524% ( 137) 00:10:36.317 12451.840 - 12511.418: 63.9386% ( 135) 00:10:36.317 12511.418 - 12570.996: 65.3868% ( 152) 00:10:36.317 12570.996 - 12630.575: 66.7778% ( 146) 00:10:36.317 12630.575 - 12690.153: 68.2641% ( 156) 00:10:36.317 12690.153 - 12749.731: 69.7694% ( 158) 00:10:36.317 12749.731 - 
12809.309: 71.2938% ( 160) 00:10:36.317 12809.309 - 12868.887: 72.5133% ( 128) 00:10:36.317 12868.887 - 12928.465: 73.5709% ( 111) 00:10:36.317 12928.465 - 12988.044: 74.5332% ( 101) 00:10:36.317 12988.044 - 13047.622: 75.4859% ( 100) 00:10:36.317 13047.622 - 13107.200: 76.4768% ( 104) 00:10:36.317 13107.200 - 13166.778: 77.4200% ( 99) 00:10:36.317 13166.778 - 13226.356: 78.4013% ( 103) 00:10:36.317 13226.356 - 13285.935: 79.4398% ( 109) 00:10:36.317 13285.935 - 13345.513: 80.4592% ( 107) 00:10:36.317 13345.513 - 13405.091: 81.4120% ( 100) 00:10:36.317 13405.091 - 13464.669: 82.2599% ( 89) 00:10:36.317 13464.669 - 13524.247: 83.1936% ( 98) 00:10:36.317 13524.247 - 13583.825: 84.1368% ( 99) 00:10:36.317 13583.825 - 13643.404: 85.0229% ( 93) 00:10:36.317 13643.404 - 13702.982: 85.7946% ( 81) 00:10:36.317 13702.982 - 13762.560: 86.4710% ( 71) 00:10:36.318 13762.560 - 13822.138: 87.1189% ( 68) 00:10:36.318 13822.138 - 13881.716: 87.7572% ( 67) 00:10:36.318 13881.716 - 13941.295: 88.4337% ( 71) 00:10:36.318 13941.295 - 14000.873: 89.0911% ( 69) 00:10:36.318 14000.873 - 14060.451: 89.7389% ( 68) 00:10:36.318 14060.451 - 14120.029: 90.3487% ( 64) 00:10:36.318 14120.029 - 14179.607: 91.0061% ( 69) 00:10:36.318 14179.607 - 14239.185: 91.6159% ( 64) 00:10:36.318 14239.185 - 14298.764: 92.2066% ( 62) 00:10:36.318 14298.764 - 14358.342: 92.8068% ( 63) 00:10:36.318 14358.342 - 14417.920: 93.3308% ( 55) 00:10:36.318 14417.920 - 14477.498: 93.8739% ( 57) 00:10:36.318 14477.498 - 14537.076: 94.3216% ( 47) 00:10:36.318 14537.076 - 14596.655: 94.7409% ( 44) 00:10:36.318 14596.655 - 14656.233: 95.1124% ( 39) 00:10:36.318 14656.233 - 14715.811: 95.4649% ( 37) 00:10:36.318 14715.811 - 14775.389: 95.7889% ( 34) 00:10:36.318 14775.389 - 14834.967: 96.0747% ( 30) 00:10:36.318 14834.967 - 14894.545: 96.3319% ( 27) 00:10:36.318 14894.545 - 14954.124: 96.5034% ( 18) 00:10:36.318 14954.124 - 15013.702: 96.6654% ( 17) 00:10:36.318 15013.702 - 15073.280: 96.7988% ( 14) 00:10:36.318 15073.280 - 15132.858: 96.9798% ( 19) 00:10:36.318 15132.858 - 15192.436: 97.1418% ( 17) 00:10:36.318 15192.436 - 15252.015: 97.2847% ( 15) 00:10:36.318 15252.015 - 15371.171: 97.5991% ( 33) 00:10:36.318 15371.171 - 15490.327: 97.8563% ( 27) 00:10:36.318 15490.327 - 15609.484: 98.0755% ( 23) 00:10:36.318 15609.484 - 15728.640: 98.2088% ( 14) 00:10:36.318 15728.640 - 15847.796: 98.3232% ( 12) 00:10:36.318 15847.796 - 15966.953: 98.4089% ( 9) 00:10:36.318 15966.953 - 16086.109: 98.5232% ( 12) 00:10:36.318 16086.109 - 16205.265: 98.6376% ( 12) 00:10:36.318 16205.265 - 16324.422: 98.7329% ( 10) 00:10:36.318 16324.422 - 16443.578: 98.7805% ( 5) 00:10:36.318 19422.487 - 19541.644: 98.8472% ( 7) 00:10:36.318 19541.644 - 19660.800: 98.8758% ( 3) 00:10:36.318 19660.800 - 19779.956: 98.9043% ( 3) 00:10:36.318 19779.956 - 19899.113: 98.9329% ( 3) 00:10:36.318 19899.113 - 20018.269: 98.9615% ( 3) 00:10:36.318 20018.269 - 20137.425: 98.9996% ( 4) 00:10:36.318 20137.425 - 20256.582: 99.0282% ( 3) 00:10:36.318 20256.582 - 20375.738: 99.0663% ( 4) 00:10:36.318 20375.738 - 20494.895: 99.0949% ( 3) 00:10:36.318 20494.895 - 20614.051: 99.1330% ( 4) 00:10:36.318 20614.051 - 20733.207: 99.1616% ( 3) 00:10:36.318 20733.207 - 20852.364: 99.1902% ( 3) 00:10:36.318 20852.364 - 20971.520: 99.2188% ( 3) 00:10:36.318 20971.520 - 21090.676: 99.2569% ( 4) 00:10:36.318 21090.676 - 21209.833: 99.2854% ( 3) 00:10:36.318 21209.833 - 21328.989: 99.3236% ( 4) 00:10:36.318 21328.989 - 21448.145: 99.3521% ( 3) 00:10:36.318 21448.145 - 21567.302: 99.3902% ( 4) 00:10:36.318 
21567.302 - 21686.458: 99.4188% ( 3) 00:10:36.318 21686.458 - 21805.615: 99.4474% ( 3) 00:10:36.318 21805.615 - 21924.771: 99.4760% ( 3) 00:10:36.318 21924.771 - 22043.927: 99.5046% ( 3) 00:10:36.318 22043.927 - 22163.084: 99.5332% ( 3) 00:10:36.318 22163.084 - 22282.240: 99.5713% ( 4) 00:10:36.318 22282.240 - 22401.396: 99.5998% ( 3) 00:10:36.318 22401.396 - 22520.553: 99.6380% ( 4) 00:10:36.318 22520.553 - 22639.709: 99.6665% ( 3) 00:10:36.318 22639.709 - 22758.865: 99.7046% ( 4) 00:10:36.318 22758.865 - 22878.022: 99.7332% ( 3) 00:10:36.318 22878.022 - 22997.178: 99.7713% ( 4) 00:10:36.318 22997.178 - 23116.335: 99.7999% ( 3) 00:10:36.318 23116.335 - 23235.491: 99.8285% ( 3) 00:10:36.318 23235.491 - 23354.647: 99.8666% ( 4) 00:10:36.318 23354.647 - 23473.804: 99.8952% ( 3) 00:10:36.318 23473.804 - 23592.960: 99.9238% ( 3) 00:10:36.318 23592.960 - 23712.116: 99.9619% ( 4) 00:10:36.318 23712.116 - 23831.273: 99.9905% ( 3) 00:10:36.318 23831.273 - 23950.429: 100.0000% ( 1) 00:10:36.318 00:10:36.318 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:36.318 ============================================================================== 00:10:36.318 Range in us Cumulative IO count 00:10:36.318 9234.618 - 9294.196: 0.0286% ( 3) 00:10:36.318 9294.196 - 9353.775: 0.1334% ( 11) 00:10:36.318 9353.775 - 9413.353: 0.2572% ( 13) 00:10:36.318 9413.353 - 9472.931: 0.4478% ( 20) 00:10:36.318 9472.931 - 9532.509: 0.6669% ( 23) 00:10:36.318 9532.509 - 9592.087: 0.9146% ( 26) 00:10:36.318 9592.087 - 9651.665: 1.2767% ( 38) 00:10:36.318 9651.665 - 9711.244: 1.5625% ( 30) 00:10:36.318 9711.244 - 9770.822: 2.1056% ( 57) 00:10:36.318 9770.822 - 9830.400: 2.5819% ( 50) 00:10:36.318 9830.400 - 9889.978: 3.0678% ( 51) 00:10:36.318 9889.978 - 9949.556: 3.8967% ( 87) 00:10:36.318 9949.556 - 10009.135: 4.7447% ( 89) 00:10:36.318 10009.135 - 10068.713: 5.6498% ( 95) 00:10:36.318 10068.713 - 10128.291: 6.6311% ( 103) 00:10:36.318 10128.291 - 10187.869: 7.6029% ( 102) 00:10:36.318 10187.869 - 10247.447: 8.5747% ( 102) 00:10:36.318 10247.447 - 10307.025: 9.6132% ( 109) 00:10:36.318 10307.025 - 10366.604: 10.6898% ( 113) 00:10:36.318 10366.604 - 10426.182: 11.7092% ( 107) 00:10:36.318 10426.182 - 10485.760: 12.8239% ( 117) 00:10:36.318 10485.760 - 10545.338: 13.9768% ( 121) 00:10:36.318 10545.338 - 10604.916: 15.1391% ( 122) 00:10:36.318 10604.916 - 10664.495: 16.2633% ( 118) 00:10:36.318 10664.495 - 10724.073: 17.4638% ( 126) 00:10:36.318 10724.073 - 10783.651: 18.7309% ( 133) 00:10:36.318 10783.651 - 10843.229: 19.9981% ( 133) 00:10:36.318 10843.229 - 10902.807: 21.2843% ( 135) 00:10:36.318 10902.807 - 10962.385: 22.5419% ( 132) 00:10:36.318 10962.385 - 11021.964: 23.8662% ( 139) 00:10:36.318 11021.964 - 11081.542: 25.1810% ( 138) 00:10:36.318 11081.542 - 11141.120: 26.4958% ( 138) 00:10:36.318 11141.120 - 11200.698: 27.9345% ( 151) 00:10:36.318 11200.698 - 11260.276: 29.4303% ( 157) 00:10:36.318 11260.276 - 11319.855: 31.1643% ( 182) 00:10:36.318 11319.855 - 11379.433: 32.9840% ( 191) 00:10:36.318 11379.433 - 11439.011: 34.8228% ( 193) 00:10:36.318 11439.011 - 11498.589: 36.5663% ( 183) 00:10:36.318 11498.589 - 11558.167: 38.2527% ( 177) 00:10:36.318 11558.167 - 11617.745: 39.9581% ( 179) 00:10:36.318 11617.745 - 11677.324: 41.6730% ( 180) 00:10:36.318 11677.324 - 11736.902: 43.3880% ( 180) 00:10:36.318 11736.902 - 11796.480: 44.9219% ( 161) 00:10:36.318 11796.480 - 11856.058: 46.6082% ( 177) 00:10:36.318 11856.058 - 11915.636: 48.2088% ( 168) 00:10:36.318 11915.636 - 11975.215: 49.9143% ( 179) 
00:10:36.318 11975.215 - 12034.793: 51.5720% ( 174) 00:10:36.318 12034.793 - 12094.371: 53.1250% ( 163) 00:10:36.318 12094.371 - 12153.949: 54.6970% ( 165) 00:10:36.318 12153.949 - 12213.527: 56.3072% ( 169) 00:10:36.318 12213.527 - 12273.105: 57.8316% ( 160) 00:10:36.318 12273.105 - 12332.684: 59.3369% ( 158) 00:10:36.318 12332.684 - 12392.262: 60.7946% ( 153) 00:10:36.318 12392.262 - 12451.840: 62.2713% ( 155) 00:10:36.318 12451.840 - 12511.418: 63.7481% ( 155) 00:10:36.318 12511.418 - 12570.996: 65.1105% ( 143) 00:10:36.318 12570.996 - 12630.575: 66.5301% ( 149) 00:10:36.318 12630.575 - 12690.153: 67.8449% ( 138) 00:10:36.318 12690.153 - 12749.731: 69.1406% ( 136) 00:10:36.318 12749.731 - 12809.309: 70.4649% ( 139) 00:10:36.318 12809.309 - 12868.887: 71.6463% ( 124) 00:10:36.318 12868.887 - 12928.465: 72.7801% ( 119) 00:10:36.318 12928.465 - 12988.044: 73.8662% ( 114) 00:10:36.318 12988.044 - 13047.622: 74.9143% ( 110) 00:10:36.318 13047.622 - 13107.200: 75.9527% ( 109) 00:10:36.318 13107.200 - 13166.778: 76.9055% ( 100) 00:10:36.318 13166.778 - 13226.356: 77.9154% ( 106) 00:10:36.318 13226.356 - 13285.935: 78.9253% ( 106) 00:10:36.318 13285.935 - 13345.513: 80.0210% ( 115) 00:10:36.318 13345.513 - 13405.091: 81.0309% ( 106) 00:10:36.318 13405.091 - 13464.669: 82.0217% ( 104) 00:10:36.318 13464.669 - 13524.247: 82.9459% ( 97) 00:10:36.318 13524.247 - 13583.825: 83.8415% ( 94) 00:10:36.319 13583.825 - 13643.404: 84.6799% ( 88) 00:10:36.319 13643.404 - 13702.982: 85.4707% ( 83) 00:10:36.319 13702.982 - 13762.560: 86.3567% ( 93) 00:10:36.319 13762.560 - 13822.138: 87.1951% ( 88) 00:10:36.319 13822.138 - 13881.716: 87.9383% ( 78) 00:10:36.319 13881.716 - 13941.295: 88.7100% ( 81) 00:10:36.319 13941.295 - 14000.873: 89.4436% ( 77) 00:10:36.319 14000.873 - 14060.451: 90.1582% ( 75) 00:10:36.319 14060.451 - 14120.029: 90.8441% ( 72) 00:10:36.319 14120.029 - 14179.607: 91.4825% ( 67) 00:10:36.319 14179.607 - 14239.185: 92.1208% ( 67) 00:10:36.319 14239.185 - 14298.764: 92.7496% ( 66) 00:10:36.319 14298.764 - 14358.342: 93.3975% ( 68) 00:10:36.319 14358.342 - 14417.920: 93.9882% ( 62) 00:10:36.319 14417.920 - 14477.498: 94.5217% ( 56) 00:10:36.319 14477.498 - 14537.076: 95.0171% ( 52) 00:10:36.319 14537.076 - 14596.655: 95.4173% ( 42) 00:10:36.319 14596.655 - 14656.233: 95.7889% ( 39) 00:10:36.319 14656.233 - 14715.811: 96.0938% ( 32) 00:10:36.319 14715.811 - 14775.389: 96.3319% ( 25) 00:10:36.319 14775.389 - 14834.967: 96.5320% ( 21) 00:10:36.319 14834.967 - 14894.545: 96.7035% ( 18) 00:10:36.319 14894.545 - 14954.124: 96.8178% ( 12) 00:10:36.319 14954.124 - 15013.702: 96.9131% ( 10) 00:10:36.319 15013.702 - 15073.280: 97.0370% ( 13) 00:10:36.319 15073.280 - 15132.858: 97.1513% ( 12) 00:10:36.319 15132.858 - 15192.436: 97.2561% ( 11) 00:10:36.319 15192.436 - 15252.015: 97.3800% ( 13) 00:10:36.319 15252.015 - 15371.171: 97.6086% ( 24) 00:10:36.319 15371.171 - 15490.327: 97.8182% ( 22) 00:10:36.319 15490.327 - 15609.484: 98.0469% ( 24) 00:10:36.319 15609.484 - 15728.640: 98.2565% ( 22) 00:10:36.319 15728.640 - 15847.796: 98.4184% ( 17) 00:10:36.319 15847.796 - 15966.953: 98.5709% ( 16) 00:10:36.319 15966.953 - 16086.109: 98.6662% ( 10) 00:10:36.319 16086.109 - 16205.265: 98.7138% ( 5) 00:10:36.319 16205.265 - 16324.422: 98.7710% ( 6) 00:10:36.577 16324.422 - 16443.578: 98.7805% ( 1) 00:10:36.577 17992.611 - 18111.767: 98.8186% ( 4) 00:10:36.577 18111.767 - 18230.924: 98.8472% ( 3) 00:10:36.577 18230.924 - 18350.080: 98.8758% ( 3) 00:10:36.577 18350.080 - 18469.236: 98.9043% ( 3) 00:10:36.577 
18469.236 - 18588.393: 98.9425% ( 4) 00:10:36.577 18588.393 - 18707.549: 98.9710% ( 3) 00:10:36.577 18707.549 - 18826.705: 98.9996% ( 3) 00:10:36.577 18826.705 - 18945.862: 99.0377% ( 4) 00:10:36.577 18945.862 - 19065.018: 99.0663% ( 3) 00:10:36.577 19065.018 - 19184.175: 99.1044% ( 4) 00:10:36.577 19184.175 - 19303.331: 99.1330% ( 3) 00:10:36.577 19303.331 - 19422.487: 99.1711% ( 4) 00:10:36.577 19422.487 - 19541.644: 99.1997% ( 3) 00:10:36.577 19541.644 - 19660.800: 99.2283% ( 3) 00:10:36.577 19660.800 - 19779.956: 99.2664% ( 4) 00:10:36.577 19779.956 - 19899.113: 99.2950% ( 3) 00:10:36.577 19899.113 - 20018.269: 99.3331% ( 4) 00:10:36.577 20018.269 - 20137.425: 99.3617% ( 3) 00:10:36.577 20137.425 - 20256.582: 99.3998% ( 4) 00:10:36.577 20256.582 - 20375.738: 99.4284% ( 3) 00:10:36.577 20375.738 - 20494.895: 99.4665% ( 4) 00:10:36.577 20494.895 - 20614.051: 99.4950% ( 3) 00:10:36.577 20614.051 - 20733.207: 99.5332% ( 4) 00:10:36.577 20733.207 - 20852.364: 99.5617% ( 3) 00:10:36.577 20852.364 - 20971.520: 99.5998% ( 4) 00:10:36.577 20971.520 - 21090.676: 99.6284% ( 3) 00:10:36.577 21090.676 - 21209.833: 99.6570% ( 3) 00:10:36.577 21209.833 - 21328.989: 99.6856% ( 3) 00:10:36.577 21328.989 - 21448.145: 99.7237% ( 4) 00:10:36.577 21448.145 - 21567.302: 99.7618% ( 4) 00:10:36.577 21567.302 - 21686.458: 99.7904% ( 3) 00:10:36.577 21686.458 - 21805.615: 99.8190% ( 3) 00:10:36.577 21805.615 - 21924.771: 99.8571% ( 4) 00:10:36.577 21924.771 - 22043.927: 99.8857% ( 3) 00:10:36.577 22043.927 - 22163.084: 99.9238% ( 4) 00:10:36.577 22163.084 - 22282.240: 99.9524% ( 3) 00:10:36.577 22282.240 - 22401.396: 99.9809% ( 3) 00:10:36.577 22401.396 - 22520.553: 100.0000% ( 2) 00:10:36.577 00:10:36.577 19:11:13 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:36.577 00:10:36.578 real 0m2.853s 00:10:36.578 user 0m2.460s 00:10:36.578 sys 0m0.284s 00:10:36.578 19:11:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:36.578 19:11:13 -- common/autotest_common.sh@10 -- # set +x 00:10:36.578 ************************************ 00:10:36.578 END TEST nvme_perf 00:10:36.578 ************************************ 00:10:36.578 19:11:13 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:36.578 19:11:13 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:10:36.578 19:11:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:36.578 19:11:13 -- common/autotest_common.sh@10 -- # set +x 00:10:36.578 ************************************ 00:10:36.578 START TEST nvme_hello_world 00:10:36.578 ************************************ 00:10:36.578 19:11:13 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:36.836 Initializing NVMe Controllers 00:10:36.836 Attached to 0000:00:06.0 00:10:36.836 Namespace ID: 1 size: 6GB 00:10:36.836 Attached to 0000:00:07.0 00:10:36.836 Namespace ID: 1 size: 5GB 00:10:36.836 Attached to 0000:00:09.0 00:10:36.836 Namespace ID: 1 size: 1GB 00:10:36.836 Attached to 0000:00:08.0 00:10:36.836 Namespace ID: 1 size: 4GB 00:10:36.836 Namespace ID: 2 size: 4GB 00:10:36.836 Namespace ID: 3 size: 4GB 00:10:36.836 Initialization complete. 00:10:36.836 INFO: using host memory buffer for IO 00:10:36.836 Hello world! 00:10:36.836 INFO: using host memory buffer for IO 00:10:36.836 Hello world! 00:10:36.836 INFO: using host memory buffer for IO 00:10:36.836 Hello world! 00:10:36.836 INFO: using host memory buffer for IO 00:10:36.836 Hello world! 
00:10:36.836 INFO: using host memory buffer for IO 00:10:36.836 Hello world! 00:10:36.836 INFO: using host memory buffer for IO 00:10:36.836 Hello world! 00:10:36.836 00:10:36.836 real 0m0.392s 00:10:36.836 user 0m0.202s 00:10:36.836 sys 0m0.142s 00:10:36.836 19:11:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:36.836 ************************************ 00:10:36.836 END TEST nvme_hello_world 00:10:36.836 ************************************ 00:10:36.836 19:11:14 -- common/autotest_common.sh@10 -- # set +x 00:10:36.836 19:11:14 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:36.836 19:11:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:36.836 19:11:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:36.836 19:11:14 -- common/autotest_common.sh@10 -- # set +x 00:10:36.836 ************************************ 00:10:36.836 START TEST nvme_sgl 00:10:36.836 ************************************ 00:10:36.836 19:11:14 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:37.095 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:10:37.095 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:10:37.095 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:10:37.354 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:10:37.354 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:10:37.354 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:10:37.354 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:10:37.354 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:10:37.354 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:10:37.354 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:10:37.354 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:10:37.354 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:10:37.354 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:10:37.354 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:10:37.354 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:10:37.354 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:10:37.354 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:10:37.354 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:10:37.354 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:10:37.354 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:10:37.355 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:10:37.355 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:10:37.355 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:10:37.355 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_8 Invalid IO length parameter 
00:10:37.355 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:10:37.355 0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:10:37.355 NVMe Readv/Writev Request test 00:10:37.355 Attached to 0000:00:06.0 00:10:37.355 Attached to 0000:00:07.0 00:10:37.355 Attached to 0000:00:09.0 00:10:37.355 Attached to 0000:00:08.0 00:10:37.355 0000:00:06.0: build_io_request_2 test passed 00:10:37.355 0000:00:06.0: build_io_request_4 test passed 00:10:37.355 0000:00:06.0: build_io_request_5 test passed 00:10:37.355 0000:00:06.0: build_io_request_6 test passed 00:10:37.355 0000:00:06.0: build_io_request_7 test passed 00:10:37.355 0000:00:06.0: build_io_request_10 test passed 00:10:37.355 0000:00:07.0: build_io_request_2 test passed 00:10:37.355 0000:00:07.0: build_io_request_4 test passed 00:10:37.355 0000:00:07.0: build_io_request_5 test passed 00:10:37.355 0000:00:07.0: build_io_request_6 test passed 00:10:37.355 0000:00:07.0: build_io_request_7 test passed 00:10:37.355 0000:00:07.0: build_io_request_10 test passed 00:10:37.355 Cleaning up... 00:10:37.613 ************************************ 00:10:37.613 END TEST nvme_sgl 00:10:37.613 ************************************ 00:10:37.613 00:10:37.613 real 0m0.556s 00:10:37.613 user 0m0.365s 00:10:37.613 sys 0m0.146s 00:10:37.613 19:11:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:37.613 19:11:14 -- common/autotest_common.sh@10 -- # set +x 00:10:37.613 19:11:14 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:37.613 19:11:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:37.613 19:11:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:37.613 19:11:14 -- common/autotest_common.sh@10 -- # set +x 00:10:37.613 ************************************ 00:10:37.613 START TEST nvme_e2edp 00:10:37.613 ************************************ 00:10:37.613 19:11:14 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:37.872 NVMe Write/Read with End-to-End data protection test 00:10:37.872 Attached to 0000:00:06.0 00:10:37.872 Attached to 0000:00:07.0 00:10:37.872 Attached to 0000:00:09.0 00:10:37.872 Attached to 0000:00:08.0 00:10:37.872 Cleaning up... 
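The START TEST / END TEST banners and the real/user/sys timings that bracket each test in this log come from the harness's run_test helper in autotest_common.sh. The helper's body is not shown here, so the bash below is only a rough sketch of its observable behaviour (banner, timed command, banner), not the actual implementation.

run_test_sketch() {                      # hedged stand-in for autotest_common.sh's run_test
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                            # the wrapped test binary or shell function
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}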
00:10:37.872 00:10:37.872 real 0m0.296s 00:10:37.872 user 0m0.110s 00:10:37.872 sys 0m0.136s 00:10:37.872 19:11:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:37.872 19:11:15 -- common/autotest_common.sh@10 -- # set +x 00:10:37.872 ************************************ 00:10:37.872 END TEST nvme_e2edp 00:10:37.872 ************************************ 00:10:37.872 19:11:15 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:37.872 19:11:15 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:37.872 19:11:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:37.872 19:11:15 -- common/autotest_common.sh@10 -- # set +x 00:10:37.872 ************************************ 00:10:37.872 START TEST nvme_reserve 00:10:37.872 ************************************ 00:10:37.872 19:11:15 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:38.130 ===================================================== 00:10:38.131 NVMe Controller at PCI bus 0, device 6, function 0 00:10:38.131 ===================================================== 00:10:38.131 Reservations: Not Supported 00:10:38.131 ===================================================== 00:10:38.131 NVMe Controller at PCI bus 0, device 7, function 0 00:10:38.131 ===================================================== 00:10:38.131 Reservations: Not Supported 00:10:38.131 ===================================================== 00:10:38.131 NVMe Controller at PCI bus 0, device 9, function 0 00:10:38.131 ===================================================== 00:10:38.131 Reservations: Not Supported 00:10:38.131 ===================================================== 00:10:38.131 NVMe Controller at PCI bus 0, device 8, function 0 00:10:38.131 ===================================================== 00:10:38.131 Reservations: Not Supported 00:10:38.131 Reservation test passed 00:10:38.131 00:10:38.131 real 0m0.287s 00:10:38.131 user 0m0.100s 00:10:38.131 sys 0m0.144s 00:10:38.131 19:11:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:38.131 19:11:15 -- common/autotest_common.sh@10 -- # set +x 00:10:38.131 ************************************ 00:10:38.131 END TEST nvme_reserve 00:10:38.131 ************************************ 00:10:38.131 19:11:15 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:38.131 19:11:15 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:38.131 19:11:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:38.131 19:11:15 -- common/autotest_common.sh@10 -- # set +x 00:10:38.131 ************************************ 00:10:38.131 START TEST nvme_err_injection 00:10:38.131 ************************************ 00:10:38.131 19:11:15 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:38.698 NVMe Error Injection test 00:10:38.698 Attached to 0000:00:06.0 00:10:38.698 Attached to 0000:00:07.0 00:10:38.698 Attached to 0000:00:09.0 00:10:38.698 Attached to 0000:00:08.0 00:10:38.698 0000:00:06.0: get features failed as expected 00:10:38.698 0000:00:07.0: get features failed as expected 00:10:38.698 0000:00:09.0: get features failed as expected 00:10:38.698 0000:00:08.0: get features failed as expected 00:10:38.698 0000:00:06.0: get features successfully as expected 00:10:38.698 0000:00:07.0: get features successfully as expected 00:10:38.698 0000:00:09.0: get features 
successfully as expected 00:10:38.698 0000:00:08.0: get features successfully as expected 00:10:38.698 0000:00:06.0: read failed as expected 00:10:38.698 0000:00:07.0: read failed as expected 00:10:38.698 0000:00:09.0: read failed as expected 00:10:38.698 0000:00:08.0: read failed as expected 00:10:38.698 0000:00:06.0: read successfully as expected 00:10:38.698 0000:00:07.0: read successfully as expected 00:10:38.698 0000:00:09.0: read successfully as expected 00:10:38.698 0000:00:08.0: read successfully as expected 00:10:38.698 Cleaning up... 00:10:38.698 00:10:38.698 real 0m0.364s 00:10:38.698 user 0m0.170s 00:10:38.698 sys 0m0.145s 00:10:38.698 ************************************ 00:10:38.698 END TEST nvme_err_injection 00:10:38.698 ************************************ 00:10:38.698 19:11:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:38.698 19:11:15 -- common/autotest_common.sh@10 -- # set +x 00:10:38.698 19:11:15 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:38.698 19:11:15 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:10:38.698 19:11:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:38.698 19:11:15 -- common/autotest_common.sh@10 -- # set +x 00:10:38.698 ************************************ 00:10:38.698 START TEST nvme_overhead 00:10:38.698 ************************************ 00:10:38.698 19:11:15 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:40.073 Initializing NVMe Controllers 00:10:40.073 Attached to 0000:00:06.0 00:10:40.073 Attached to 0000:00:07.0 00:10:40.073 Attached to 0000:00:09.0 00:10:40.073 Attached to 0000:00:08.0 00:10:40.073 Initialization complete. Launching workers. 
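The nvme_overhead stage above is started as overhead -o 4096 -t 1 -H -i 0. The flag meanings in the comment below are assumptions based on common SPDK perf-tool conventions, not anything stated in this log; the command itself just reproduces the logged invocation by hand.

# Assumed meanings: -o I/O size in bytes, -t run time in seconds,
# -H print the submit/complete latency histograms, -i DPDK shared-memory id.
SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/test/nvme/overhead/overhead" -o 4096 -t 1 -H -i 0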
00:10:40.073 submit (in ns) avg, min, max = 15990.6, 12600.0, 108322.7 00:10:40.073 complete (in ns) avg, min, max = 10660.2, 8657.7, 83546.4 00:10:40.073 00:10:40.073 Submit histogram 00:10:40.073 ================ 00:10:40.073 Range in us Cumulative Count 00:10:40.073 12.567 - 12.625: 0.0097% ( 1) 00:10:40.073 13.440 - 13.498: 0.0195% ( 1) 00:10:40.073 13.498 - 13.556: 0.0389% ( 2) 00:10:40.073 13.556 - 13.615: 0.1460% ( 11) 00:10:40.073 13.615 - 13.673: 0.2336% ( 9) 00:10:40.073 13.673 - 13.731: 0.3407% ( 11) 00:10:40.073 13.731 - 13.789: 0.5354% ( 20) 00:10:40.073 13.789 - 13.847: 0.6620% ( 13) 00:10:40.073 13.847 - 13.905: 0.8664% ( 21) 00:10:40.073 13.905 - 13.964: 1.3337% ( 48) 00:10:40.073 13.964 - 14.022: 2.3657% ( 106) 00:10:40.073 14.022 - 14.080: 4.0985% ( 178) 00:10:40.073 14.080 - 14.138: 6.3279% ( 229) 00:10:40.073 14.138 - 14.196: 8.0900% ( 181) 00:10:40.073 14.196 - 14.255: 9.9202% ( 188) 00:10:40.073 14.255 - 14.313: 12.0327% ( 217) 00:10:40.073 14.313 - 14.371: 15.9560% ( 403) 00:10:40.073 14.371 - 14.429: 23.2379% ( 748) 00:10:40.073 14.429 - 14.487: 33.3820% ( 1042) 00:10:40.073 14.487 - 14.545: 44.2757% ( 1119) 00:10:40.073 14.545 - 14.604: 52.9887% ( 895) 00:10:40.073 14.604 - 14.662: 59.2874% ( 647) 00:10:40.073 14.662 - 14.720: 62.8602% ( 367) 00:10:40.073 14.720 - 14.778: 65.1480% ( 235) 00:10:40.073 14.778 - 14.836: 66.6861% ( 158) 00:10:40.073 14.836 - 14.895: 67.8738% ( 122) 00:10:40.073 14.895 - 15.011: 69.2173% ( 138) 00:10:40.073 15.011 - 15.127: 69.9864% ( 79) 00:10:40.073 15.127 - 15.244: 70.8139% ( 85) 00:10:40.073 15.244 - 15.360: 72.6441% ( 188) 00:10:40.073 15.360 - 15.476: 74.2407% ( 164) 00:10:40.073 15.476 - 15.593: 75.3115% ( 110) 00:10:40.074 15.593 - 15.709: 76.0709% ( 78) 00:10:40.074 15.709 - 15.825: 76.6160% ( 56) 00:10:40.074 15.825 - 15.942: 76.9568% ( 35) 00:10:40.074 15.942 - 16.058: 77.2683% ( 32) 00:10:40.074 16.058 - 16.175: 77.4241% ( 16) 00:10:40.074 16.175 - 16.291: 77.5896% ( 17) 00:10:40.074 16.291 - 16.407: 77.7453% ( 16) 00:10:40.074 16.407 - 16.524: 77.8816% ( 14) 00:10:40.074 16.524 - 16.640: 77.9206% ( 4) 00:10:40.074 16.640 - 16.756: 78.0082% ( 9) 00:10:40.074 16.756 - 16.873: 78.0569% ( 5) 00:10:40.074 16.873 - 16.989: 78.1153% ( 6) 00:10:40.074 16.989 - 17.105: 78.3100% ( 20) 00:10:40.074 17.105 - 17.222: 80.3836% ( 213) 00:10:40.074 17.222 - 17.338: 84.5892% ( 432) 00:10:40.074 17.338 - 17.455: 86.8867% ( 236) 00:10:40.074 17.455 - 17.571: 87.7336% ( 87) 00:10:40.074 17.571 - 17.687: 88.0841% ( 36) 00:10:40.074 17.687 - 17.804: 88.2496% ( 17) 00:10:40.074 17.804 - 17.920: 88.5027% ( 26) 00:10:40.074 17.920 - 18.036: 88.7948% ( 30) 00:10:40.074 18.036 - 18.153: 89.3010% ( 52) 00:10:40.074 18.153 - 18.269: 89.7878% ( 50) 00:10:40.074 18.269 - 18.385: 90.2259% ( 45) 00:10:40.074 18.385 - 18.502: 90.3914% ( 17) 00:10:40.074 18.502 - 18.618: 90.5082% ( 12) 00:10:40.074 18.618 - 18.735: 90.6055% ( 10) 00:10:40.074 18.735 - 18.851: 90.6834% ( 8) 00:10:40.074 18.851 - 18.967: 90.7613% ( 8) 00:10:40.074 18.967 - 19.084: 90.8294% ( 7) 00:10:40.074 19.084 - 19.200: 90.8781% ( 5) 00:10:40.074 19.200 - 19.316: 90.9268% ( 5) 00:10:40.074 19.316 - 19.433: 90.9852% ( 6) 00:10:40.074 19.433 - 19.549: 91.0728% ( 9) 00:10:40.074 19.549 - 19.665: 91.1312% ( 6) 00:10:40.074 19.665 - 19.782: 91.2286% ( 10) 00:10:40.074 19.782 - 19.898: 91.3259% ( 10) 00:10:40.074 19.898 - 20.015: 91.5401% ( 22) 00:10:40.074 20.015 - 20.131: 91.6861% ( 15) 00:10:40.074 20.131 - 20.247: 91.8808% ( 20) 00:10:40.074 20.247 - 20.364: 92.0755% ( 20) 00:10:40.074 
20.364 - 20.480: 92.2021% ( 13) 00:10:40.074 20.480 - 20.596: 92.5136% ( 32) 00:10:40.074 20.596 - 20.713: 92.7278% ( 22) 00:10:40.074 20.713 - 20.829: 92.8933% ( 17) 00:10:40.074 20.829 - 20.945: 93.0101% ( 12) 00:10:40.074 20.945 - 21.062: 93.1464% ( 14) 00:10:40.074 21.062 - 21.178: 93.2535% ( 11) 00:10:40.074 21.178 - 21.295: 93.3898% ( 14) 00:10:40.074 21.295 - 21.411: 93.4774% ( 9) 00:10:40.074 21.411 - 21.527: 93.6040% ( 13) 00:10:40.074 21.527 - 21.644: 93.7597% ( 16) 00:10:40.074 21.644 - 21.760: 93.9155% ( 16) 00:10:40.074 21.760 - 21.876: 94.0323% ( 12) 00:10:40.074 21.876 - 21.993: 94.1199% ( 9) 00:10:40.074 21.993 - 22.109: 94.2660% ( 15) 00:10:40.074 22.109 - 22.225: 94.3341% ( 7) 00:10:40.074 22.225 - 22.342: 94.3828% ( 5) 00:10:40.074 22.342 - 22.458: 94.4412% ( 6) 00:10:40.074 22.458 - 22.575: 94.5191% ( 8) 00:10:40.074 22.575 - 22.691: 94.5775% ( 6) 00:10:40.074 22.691 - 22.807: 94.6846% ( 11) 00:10:40.074 22.807 - 22.924: 94.7430% ( 6) 00:10:40.074 22.924 - 23.040: 94.7819% ( 4) 00:10:40.074 23.040 - 23.156: 94.8501% ( 7) 00:10:40.074 23.156 - 23.273: 94.9085% ( 6) 00:10:40.074 23.273 - 23.389: 94.9669% ( 6) 00:10:40.074 23.389 - 23.505: 95.0156% ( 5) 00:10:40.074 23.505 - 23.622: 95.0837% ( 7) 00:10:40.074 23.622 - 23.738: 95.1811% ( 10) 00:10:40.074 23.738 - 23.855: 95.2882% ( 11) 00:10:40.074 23.855 - 23.971: 95.3660% ( 8) 00:10:40.074 23.971 - 24.087: 95.4147% ( 5) 00:10:40.074 24.087 - 24.204: 95.4829% ( 7) 00:10:40.074 24.204 - 24.320: 95.5121% ( 3) 00:10:40.074 24.320 - 24.436: 95.5510% ( 4) 00:10:40.074 24.436 - 24.553: 95.5802% ( 3) 00:10:40.074 24.553 - 24.669: 95.5997% ( 2) 00:10:40.074 24.669 - 24.785: 95.6094% ( 1) 00:10:40.074 24.785 - 24.902: 95.6386% ( 3) 00:10:40.074 24.902 - 25.018: 95.7068% ( 7) 00:10:40.074 25.018 - 25.135: 95.7457% ( 4) 00:10:40.074 25.135 - 25.251: 95.7847% ( 4) 00:10:40.074 25.251 - 25.367: 95.8139% ( 3) 00:10:40.074 25.367 - 25.484: 95.8528% ( 4) 00:10:40.074 25.484 - 25.600: 95.8917% ( 4) 00:10:40.074 25.600 - 25.716: 95.9404% ( 5) 00:10:40.074 25.716 - 25.833: 95.9599% ( 2) 00:10:40.074 25.833 - 25.949: 95.9696% ( 1) 00:10:40.074 26.065 - 26.182: 95.9988% ( 3) 00:10:40.074 26.182 - 26.298: 96.0378% ( 4) 00:10:40.074 26.298 - 26.415: 96.0767% ( 4) 00:10:40.074 26.415 - 26.531: 96.1059% ( 3) 00:10:40.074 26.531 - 26.647: 96.1643% ( 6) 00:10:40.074 26.647 - 26.764: 96.1838% ( 2) 00:10:40.074 26.880 - 26.996: 96.1935% ( 1) 00:10:40.074 26.996 - 27.113: 96.2130% ( 2) 00:10:40.074 27.113 - 27.229: 96.2325% ( 2) 00:10:40.074 27.229 - 27.345: 96.2422% ( 1) 00:10:40.074 27.578 - 27.695: 96.2617% ( 2) 00:10:40.074 27.927 - 28.044: 96.2714% ( 1) 00:10:40.074 28.044 - 28.160: 96.2909% ( 2) 00:10:40.074 28.160 - 28.276: 96.3006% ( 1) 00:10:40.074 28.393 - 28.509: 96.3201% ( 2) 00:10:40.074 28.509 - 28.625: 96.3298% ( 1) 00:10:40.074 28.625 - 28.742: 96.3396% ( 1) 00:10:40.074 28.742 - 28.858: 96.3688% ( 3) 00:10:40.074 28.858 - 28.975: 96.4077% ( 4) 00:10:40.074 28.975 - 29.091: 96.5732% ( 17) 00:10:40.074 29.091 - 29.207: 96.7095% ( 14) 00:10:40.074 29.207 - 29.324: 96.9529% ( 25) 00:10:40.074 29.324 - 29.440: 97.2060% ( 26) 00:10:40.074 29.440 - 29.556: 97.5857% ( 39) 00:10:40.074 29.556 - 29.673: 97.9361% ( 36) 00:10:40.074 29.673 - 29.789: 98.2379% ( 31) 00:10:40.074 29.789 - 30.022: 98.7636% ( 54) 00:10:40.074 30.022 - 30.255: 98.9875% ( 23) 00:10:40.074 30.255 - 30.487: 99.0654% ( 8) 00:10:40.074 30.487 - 30.720: 99.2114% ( 15) 00:10:40.074 30.720 - 30.953: 99.3477% ( 14) 00:10:40.074 30.953 - 31.185: 99.4354% ( 9) 00:10:40.074 31.185 
- 31.418: 99.4938% ( 6) 00:10:40.074 31.418 - 31.651: 99.5132% ( 2) 00:10:40.074 31.651 - 31.884: 99.5327% ( 2) 00:10:40.074 32.116 - 32.349: 99.5717% ( 4) 00:10:40.074 32.582 - 32.815: 99.5814% ( 1) 00:10:40.074 33.280 - 33.513: 99.5911% ( 1) 00:10:40.074 34.211 - 34.444: 99.6009% ( 1) 00:10:40.074 34.676 - 34.909: 99.6203% ( 2) 00:10:40.074 34.909 - 35.142: 99.6398% ( 2) 00:10:40.074 35.375 - 35.607: 99.6690% ( 3) 00:10:40.074 35.607 - 35.840: 99.6885% ( 2) 00:10:40.074 35.840 - 36.073: 99.7079% ( 2) 00:10:40.074 36.073 - 36.305: 99.7177% ( 1) 00:10:40.074 36.305 - 36.538: 99.7274% ( 1) 00:10:40.074 36.538 - 36.771: 99.7566% ( 3) 00:10:40.074 37.004 - 37.236: 99.7761% ( 2) 00:10:40.074 37.236 - 37.469: 99.7858% ( 1) 00:10:40.074 37.469 - 37.702: 99.7956% ( 1) 00:10:40.074 37.702 - 37.935: 99.8053% ( 1) 00:10:40.074 37.935 - 38.167: 99.8150% ( 1) 00:10:40.074 38.400 - 38.633: 99.8442% ( 3) 00:10:40.074 38.865 - 39.098: 99.8540% ( 1) 00:10:40.074 39.796 - 40.029: 99.8637% ( 1) 00:10:40.074 41.658 - 41.891: 99.8734% ( 1) 00:10:40.074 43.055 - 43.287: 99.8832% ( 1) 00:10:40.074 44.218 - 44.451: 99.8929% ( 1) 00:10:40.074 45.615 - 45.847: 99.9026% ( 1) 00:10:40.074 46.313 - 46.545: 99.9221% ( 2) 00:10:40.074 46.545 - 46.778: 99.9319% ( 1) 00:10:40.074 52.829 - 53.062: 99.9416% ( 1) 00:10:40.074 64.233 - 64.698: 99.9611% ( 2) 00:10:40.074 66.560 - 67.025: 99.9708% ( 1) 00:10:40.074 75.869 - 76.335: 99.9805% ( 1) 00:10:40.074 82.851 - 83.316: 99.9903% ( 1) 00:10:40.074 107.985 - 108.451: 100.0000% ( 1) 00:10:40.074 00:10:40.074 Complete histogram 00:10:40.074 ================== 00:10:40.074 Range in us Cumulative Count 00:10:40.074 8.611 - 8.669: 0.0097% ( 1) 00:10:40.074 8.669 - 8.727: 0.0584% ( 5) 00:10:40.074 8.727 - 8.785: 0.1655% ( 11) 00:10:40.074 8.785 - 8.844: 0.2044% ( 4) 00:10:40.074 8.844 - 8.902: 0.3213% ( 12) 00:10:40.074 8.902 - 8.960: 0.6912% ( 38) 00:10:40.074 8.960 - 9.018: 1.8107% ( 115) 00:10:40.074 9.018 - 9.076: 3.2808% ( 151) 00:10:40.074 9.076 - 9.135: 4.4977% ( 125) 00:10:40.074 9.135 - 9.193: 5.4614% ( 99) 00:10:40.074 9.193 - 9.251: 7.3988% ( 199) 00:10:40.074 9.251 - 9.309: 14.2134% ( 700) 00:10:40.074 9.309 - 9.367: 27.0347% ( 1317) 00:10:40.074 9.367 - 9.425: 41.0339% ( 1438) 00:10:40.074 9.425 - 9.484: 52.3559% ( 1163) 00:10:40.074 9.484 - 9.542: 59.5016% ( 734) 00:10:40.074 9.542 - 9.600: 63.2301% ( 383) 00:10:40.074 9.600 - 9.658: 65.7029% ( 254) 00:10:40.074 9.658 - 9.716: 68.2632% ( 263) 00:10:40.074 9.716 - 9.775: 69.9377% ( 172) 00:10:40.074 9.775 - 9.833: 71.0767% ( 117) 00:10:40.074 9.833 - 9.891: 71.6219% ( 56) 00:10:40.074 9.891 - 9.949: 71.8847% ( 27) 00:10:40.074 9.949 - 10.007: 72.0697% ( 19) 00:10:40.074 10.007 - 10.065: 72.1768% ( 11) 00:10:40.074 10.065 - 10.124: 72.3131% ( 14) 00:10:40.074 10.124 - 10.182: 72.4981% ( 19) 00:10:40.074 10.182 - 10.240: 72.9069% ( 42) 00:10:40.075 10.240 - 10.298: 73.6760% ( 79) 00:10:40.075 10.298 - 10.356: 74.5327% ( 88) 00:10:40.075 10.356 - 10.415: 75.5062% ( 100) 00:10:40.075 10.415 - 10.473: 76.1390% ( 65) 00:10:40.075 10.473 - 10.531: 76.6160% ( 49) 00:10:40.075 10.531 - 10.589: 76.9470% ( 34) 00:10:40.075 10.589 - 10.647: 77.1807% ( 24) 00:10:40.075 10.647 - 10.705: 77.3267% ( 15) 00:10:40.075 10.705 - 10.764: 77.4143% ( 9) 00:10:40.075 10.764 - 10.822: 77.5019% ( 9) 00:10:40.075 10.822 - 10.880: 77.5798% ( 8) 00:10:40.075 10.880 - 10.938: 77.6674% ( 9) 00:10:40.075 10.938 - 10.996: 77.8232% ( 16) 00:10:40.075 10.996 - 11.055: 77.9498% ( 13) 00:10:40.075 11.055 - 11.113: 78.0374% ( 9) 00:10:40.075 11.113 - 
11.171: 78.1737% ( 14) 00:10:40.075 11.171 - 11.229: 78.9428% ( 79) 00:10:40.075 11.229 - 11.287: 80.8898% ( 200) 00:10:40.075 11.287 - 11.345: 83.3139% ( 249) 00:10:40.075 11.345 - 11.404: 85.5432% ( 229) 00:10:40.075 11.404 - 11.462: 87.1690% ( 167) 00:10:40.075 11.462 - 11.520: 87.9576% ( 81) 00:10:40.075 11.520 - 11.578: 88.4054% ( 46) 00:10:40.075 11.578 - 11.636: 88.6682% ( 27) 00:10:40.075 11.636 - 11.695: 88.8532% ( 19) 00:10:40.075 11.695 - 11.753: 88.9895% ( 14) 00:10:40.075 11.753 - 11.811: 89.1160% ( 13) 00:10:40.075 11.811 - 11.869: 89.1939% ( 8) 00:10:40.075 11.869 - 11.927: 89.2621% ( 7) 00:10:40.075 11.927 - 11.985: 89.3886% ( 13) 00:10:40.075 11.985 - 12.044: 89.5347% ( 15) 00:10:40.075 12.044 - 12.102: 89.6223% ( 9) 00:10:40.075 12.102 - 12.160: 89.7002% ( 8) 00:10:40.075 12.160 - 12.218: 89.9143% ( 22) 00:10:40.075 12.218 - 12.276: 90.1772% ( 27) 00:10:40.075 12.276 - 12.335: 90.4887% ( 32) 00:10:40.075 12.335 - 12.393: 90.6737% ( 19) 00:10:40.075 12.393 - 12.451: 91.0436% ( 38) 00:10:40.075 12.451 - 12.509: 91.1994% ( 16) 00:10:40.075 12.509 - 12.567: 91.3649% ( 17) 00:10:40.075 12.567 - 12.625: 91.4428% ( 8) 00:10:40.075 12.625 - 12.684: 91.4914% ( 5) 00:10:40.075 12.684 - 12.742: 91.5206% ( 3) 00:10:40.075 12.742 - 12.800: 91.5693% ( 5) 00:10:40.075 12.800 - 12.858: 91.5790% ( 1) 00:10:40.075 12.858 - 12.916: 91.5888% ( 1) 00:10:40.075 12.916 - 12.975: 91.5985% ( 1) 00:10:40.075 12.975 - 13.033: 91.6277% ( 3) 00:10:40.075 13.033 - 13.091: 91.6861% ( 6) 00:10:40.075 13.091 - 13.149: 91.7543% ( 7) 00:10:40.075 13.149 - 13.207: 91.8030% ( 5) 00:10:40.075 13.207 - 13.265: 91.8419% ( 4) 00:10:40.075 13.265 - 13.324: 91.8906% ( 5) 00:10:40.075 13.324 - 13.382: 91.9100% ( 2) 00:10:40.075 13.382 - 13.440: 91.9587% ( 5) 00:10:40.075 13.440 - 13.498: 92.0561% ( 10) 00:10:40.075 13.498 - 13.556: 92.0853% ( 3) 00:10:40.075 13.556 - 13.615: 92.1340% ( 5) 00:10:40.075 13.615 - 13.673: 92.2021% ( 7) 00:10:40.075 13.673 - 13.731: 92.2800% ( 8) 00:10:40.075 13.731 - 13.789: 92.3871% ( 11) 00:10:40.075 13.789 - 13.847: 92.5039% ( 12) 00:10:40.075 13.847 - 13.905: 92.5818% ( 8) 00:10:40.075 13.905 - 13.964: 92.6986% ( 12) 00:10:40.075 13.964 - 14.022: 92.7570% ( 6) 00:10:40.075 14.022 - 14.080: 92.8738% ( 12) 00:10:40.075 14.080 - 14.138: 92.9712% ( 10) 00:10:40.075 14.138 - 14.196: 93.0491% ( 8) 00:10:40.075 14.196 - 14.255: 93.1075% ( 6) 00:10:40.075 14.255 - 14.313: 93.2048% ( 10) 00:10:40.075 14.313 - 14.371: 93.3022% ( 10) 00:10:40.075 14.371 - 14.429: 93.3509% ( 5) 00:10:40.075 14.429 - 14.487: 93.3898% ( 4) 00:10:40.075 14.487 - 14.545: 93.4579% ( 7) 00:10:40.075 14.545 - 14.604: 93.4969% ( 4) 00:10:40.075 14.604 - 14.662: 93.5164% ( 2) 00:10:40.075 14.662 - 14.720: 93.5358% ( 2) 00:10:40.075 14.720 - 14.778: 93.5748% ( 4) 00:10:40.075 14.778 - 14.836: 93.5942% ( 2) 00:10:40.075 14.836 - 14.895: 93.6137% ( 2) 00:10:40.075 14.895 - 15.011: 93.6332% ( 2) 00:10:40.075 15.011 - 15.127: 93.7013% ( 7) 00:10:40.075 15.127 - 15.244: 93.7500% ( 5) 00:10:40.075 15.244 - 15.360: 93.7792% ( 3) 00:10:40.075 15.360 - 15.476: 93.8571% ( 8) 00:10:40.075 15.476 - 15.593: 93.9447% ( 9) 00:10:40.075 15.593 - 15.709: 93.9739% ( 3) 00:10:40.075 15.709 - 15.825: 94.0031% ( 3) 00:10:40.075 15.825 - 15.942: 94.0615% ( 6) 00:10:40.075 15.942 - 16.058: 94.1005% ( 4) 00:10:40.075 16.058 - 16.175: 94.1686% ( 7) 00:10:40.075 16.175 - 16.291: 94.3146% ( 15) 00:10:40.075 16.291 - 16.407: 94.3925% ( 8) 00:10:40.075 16.407 - 16.524: 94.5093% ( 12) 00:10:40.075 16.524 - 16.640: 94.5775% ( 7) 00:10:40.075 16.640 - 
16.756: 94.6456% ( 7) 00:10:40.075 16.756 - 16.873: 94.6943% ( 5) 00:10:40.075 16.873 - 16.989: 94.8111% ( 12) 00:10:40.075 16.989 - 17.105: 94.8890% ( 8) 00:10:40.075 17.105 - 17.222: 94.9474% ( 6) 00:10:40.075 17.222 - 17.338: 94.9766% ( 3) 00:10:40.075 17.338 - 17.455: 95.0740% ( 10) 00:10:40.075 17.455 - 17.571: 95.1519% ( 8) 00:10:40.075 17.571 - 17.687: 95.2492% ( 10) 00:10:40.075 17.687 - 17.804: 95.3758% ( 13) 00:10:40.075 17.804 - 17.920: 95.4342% ( 6) 00:10:40.075 17.920 - 18.036: 95.5121% ( 8) 00:10:40.075 18.036 - 18.153: 95.5900% ( 8) 00:10:40.075 18.153 - 18.269: 95.6289% ( 4) 00:10:40.075 18.269 - 18.385: 95.6484% ( 2) 00:10:40.075 18.385 - 18.502: 95.7165% ( 7) 00:10:40.075 18.502 - 18.618: 95.7847% ( 7) 00:10:40.075 18.618 - 18.735: 95.8041% ( 2) 00:10:40.075 18.735 - 18.851: 95.8431% ( 4) 00:10:40.075 18.851 - 18.967: 95.8723% ( 3) 00:10:40.075 18.967 - 19.084: 95.9015% ( 3) 00:10:40.075 19.084 - 19.200: 95.9112% ( 1) 00:10:40.075 19.200 - 19.316: 95.9307% ( 2) 00:10:40.075 19.316 - 19.433: 95.9404% ( 1) 00:10:40.075 19.433 - 19.549: 95.9599% ( 2) 00:10:40.075 19.549 - 19.665: 96.0086% ( 5) 00:10:40.075 19.665 - 19.782: 96.0280% ( 2) 00:10:40.075 19.782 - 19.898: 96.0670% ( 4) 00:10:40.075 19.898 - 20.015: 96.0864% ( 2) 00:10:40.075 20.015 - 20.131: 96.0962% ( 1) 00:10:40.075 20.247 - 20.364: 96.1059% ( 1) 00:10:40.075 20.364 - 20.480: 96.1157% ( 1) 00:10:40.075 20.480 - 20.596: 96.1254% ( 1) 00:10:40.075 20.596 - 20.713: 96.1546% ( 3) 00:10:40.075 20.713 - 20.829: 96.1741% ( 2) 00:10:40.075 20.829 - 20.945: 96.2130% ( 4) 00:10:40.075 20.945 - 21.062: 96.2519% ( 4) 00:10:40.075 21.062 - 21.178: 96.2812% ( 3) 00:10:40.075 21.178 - 21.295: 96.3006% ( 2) 00:10:40.075 21.295 - 21.411: 96.3298% ( 3) 00:10:40.075 21.411 - 21.527: 96.3396% ( 1) 00:10:40.075 21.527 - 21.644: 96.3882% ( 5) 00:10:40.075 21.644 - 21.760: 96.3980% ( 1) 00:10:40.075 21.876 - 21.993: 96.4077% ( 1) 00:10:40.075 21.993 - 22.109: 96.4174% ( 1) 00:10:40.075 22.109 - 22.225: 96.4369% ( 2) 00:10:40.075 22.342 - 22.458: 96.4564% ( 2) 00:10:40.075 22.458 - 22.575: 96.4759% ( 2) 00:10:40.075 22.575 - 22.691: 96.4856% ( 1) 00:10:40.075 22.691 - 22.807: 96.4953% ( 1) 00:10:40.075 23.040 - 23.156: 96.5148% ( 2) 00:10:40.075 23.273 - 23.389: 96.5245% ( 1) 00:10:40.075 23.389 - 23.505: 96.5537% ( 3) 00:10:40.075 23.505 - 23.622: 96.5635% ( 1) 00:10:40.075 23.622 - 23.738: 96.6121% ( 5) 00:10:40.075 23.738 - 23.855: 96.6608% ( 5) 00:10:40.075 23.855 - 23.971: 96.7874% ( 13) 00:10:40.075 23.971 - 24.087: 96.9529% ( 17) 00:10:40.075 24.087 - 24.204: 97.1963% ( 25) 00:10:40.075 24.204 - 24.320: 97.4202% ( 23) 00:10:40.075 24.320 - 24.436: 97.7512% ( 34) 00:10:40.075 24.436 - 24.553: 98.1016% ( 36) 00:10:40.075 24.553 - 24.669: 98.4424% ( 35) 00:10:40.075 24.669 - 24.785: 98.7442% ( 31) 00:10:40.075 24.785 - 24.902: 98.9778% ( 24) 00:10:40.075 24.902 - 25.018: 99.0849% ( 11) 00:10:40.075 25.018 - 25.135: 99.2017% ( 12) 00:10:40.075 25.135 - 25.251: 99.2699% ( 7) 00:10:40.075 25.251 - 25.367: 99.3575% ( 9) 00:10:40.075 25.367 - 25.484: 99.4062% ( 5) 00:10:40.075 25.484 - 25.600: 99.4840% ( 8) 00:10:40.075 25.600 - 25.716: 99.5327% ( 5) 00:10:40.075 25.716 - 25.833: 99.5424% ( 1) 00:10:40.075 25.833 - 25.949: 99.5619% ( 2) 00:10:40.075 25.949 - 26.065: 99.5814% ( 2) 00:10:40.075 26.065 - 26.182: 99.5911% ( 1) 00:10:40.075 26.182 - 26.298: 99.6009% ( 1) 00:10:40.075 26.298 - 26.415: 99.6106% ( 1) 00:10:40.075 26.531 - 26.647: 99.6301% ( 2) 00:10:40.075 26.647 - 26.764: 99.6398% ( 1) 00:10:40.075 26.880 - 26.996: 99.6495% ( 
1) 00:10:40.075 26.996 - 27.113: 99.6593% ( 1) 00:10:40.075 27.229 - 27.345: 99.6690% ( 1) 00:10:40.075 27.695 - 27.811: 99.6885% ( 2) 00:10:40.075 27.927 - 28.044: 99.6982% ( 1) 00:10:40.075 28.509 - 28.625: 99.7079% ( 1) 00:10:40.075 28.858 - 28.975: 99.7177% ( 1) 00:10:40.075 28.975 - 29.091: 99.7274% ( 1) 00:10:40.075 29.207 - 29.324: 99.7469% ( 2) 00:10:40.075 29.324 - 29.440: 99.7566% ( 1) 00:10:40.075 29.789 - 30.022: 99.7761% ( 2) 00:10:40.075 30.487 - 30.720: 99.7956% ( 2) 00:10:40.075 30.953 - 31.185: 99.8053% ( 1) 00:10:40.076 31.185 - 31.418: 99.8248% ( 2) 00:10:40.076 31.418 - 31.651: 99.8345% ( 1) 00:10:40.076 31.884 - 32.116: 99.8540% ( 2) 00:10:40.076 32.349 - 32.582: 99.8832% ( 3) 00:10:40.076 32.582 - 32.815: 99.8929% ( 1) 00:10:40.076 32.815 - 33.047: 99.9026% ( 1) 00:10:40.076 36.305 - 36.538: 99.9124% ( 1) 00:10:40.076 36.538 - 36.771: 99.9221% ( 1) 00:10:40.076 37.236 - 37.469: 99.9319% ( 1) 00:10:40.076 39.098 - 39.331: 99.9416% ( 1) 00:10:40.076 39.331 - 39.564: 99.9513% ( 1) 00:10:40.076 40.262 - 40.495: 99.9611% ( 1) 00:10:40.076 41.193 - 41.425: 99.9708% ( 1) 00:10:40.076 41.658 - 41.891: 99.9805% ( 1) 00:10:40.076 63.767 - 64.233: 99.9903% ( 1) 00:10:40.076 83.316 - 83.782: 100.0000% ( 1) 00:10:40.076 00:10:40.076 00:10:40.076 real 0m1.300s 00:10:40.076 user 0m1.126s 00:10:40.076 sys 0m0.122s 00:10:40.076 19:11:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:40.076 19:11:17 -- common/autotest_common.sh@10 -- # set +x 00:10:40.076 ************************************ 00:10:40.076 END TEST nvme_overhead 00:10:40.076 ************************************ 00:10:40.076 19:11:17 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:40.076 19:11:17 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:10:40.076 19:11:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:40.076 19:11:17 -- common/autotest_common.sh@10 -- # set +x 00:10:40.076 ************************************ 00:10:40.076 START TEST nvme_arbitration 00:10:40.076 ************************************ 00:10:40.076 19:11:17 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:44.262 Initializing NVMe Controllers 00:10:44.262 Attached to 0000:00:06.0 00:10:44.262 Attached to 0000:00:07.0 00:10:44.262 Attached to 0000:00:09.0 00:10:44.262 Attached to 0000:00:08.0 00:10:44.262 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:44.262 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:44.262 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:44.262 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:44.262 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:44.262 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:44.262 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:44.262 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:44.262 Initialization complete. Launching workers. 
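The per-core arbitration results printed below report both IO/s and secs/100000 ios; the second figure is simply 100000 divided by the first, as the logged numbers confirm (661.33 IO/s gives 151.21 s, 832.00 IO/s gives 120.19 s). A quick check with bc:

printf 'scale=2; 100000 / 661.33\n' | bc    # prints 151.21, matching the core 0 lines below
printf 'scale=2; 100000 / 832.00\n' | bc    # prints 120.19, matching the core 1 lines below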
00:10:44.262 Starting thread on core 1 with urgent priority queue 00:10:44.262 Starting thread on core 2 with urgent priority queue 00:10:44.262 Starting thread on core 3 with urgent priority queue 00:10:44.262 Starting thread on core 0 with urgent priority queue 00:10:44.262 QEMU NVMe Ctrl (12340 ) core 0: 661.33 IO/s 151.21 secs/100000 ios 00:10:44.262 QEMU NVMe Ctrl (12342 ) core 0: 661.33 IO/s 151.21 secs/100000 ios 00:10:44.262 QEMU NVMe Ctrl (12341 ) core 1: 832.00 IO/s 120.19 secs/100000 ios 00:10:44.262 QEMU NVMe Ctrl (12342 ) core 1: 832.00 IO/s 120.19 secs/100000 ios 00:10:44.262 QEMU NVMe Ctrl (12343 ) core 2: 576.00 IO/s 173.61 secs/100000 ios 00:10:44.262 QEMU NVMe Ctrl (12342 ) core 3: 618.67 IO/s 161.64 secs/100000 ios 00:10:44.262 ======================================================== 00:10:44.262 00:10:44.262 00:10:44.262 real 0m3.532s 00:10:44.262 user 0m9.605s 00:10:44.262 sys 0m0.163s 00:10:44.262 19:11:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:44.262 ************************************ 00:10:44.262 END TEST nvme_arbitration 00:10:44.262 ************************************ 00:10:44.262 19:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:44.262 19:11:20 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:44.262 19:11:20 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:10:44.262 19:11:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:44.262 19:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:44.262 ************************************ 00:10:44.262 START TEST nvme_single_aen 00:10:44.262 ************************************ 00:10:44.262 19:11:20 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:44.262 Asynchronous Event Request test 00:10:44.262 Attached to 0000:00:06.0 00:10:44.262 Attached to 0000:00:07.0 00:10:44.262 Attached to 0000:00:09.0 00:10:44.262 Attached to 0000:00:08.0 00:10:44.262 Reset controller to setup AER completions for this process 00:10:44.262 Registering asynchronous event callbacks... 
00:10:44.262 Getting orig temperature thresholds of all controllers 00:10:44.262 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.262 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.262 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.262 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.262 Setting all controllers temperature threshold low to trigger AER 00:10:44.262 Waiting for all controllers temperature threshold to be set lower 00:10:44.262 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.262 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:44.262 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.262 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:44.262 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.262 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:44.262 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.262 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:44.262 Waiting for all controllers to trigger AER and reset threshold 00:10:44.262 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.262 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.262 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.262 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.262 Cleaning up... 00:10:44.262 ************************************ 00:10:44.262 END TEST nvme_single_aen 00:10:44.262 ************************************ 00:10:44.262 00:10:44.262 real 0m0.300s 00:10:44.262 user 0m0.115s 00:10:44.262 sys 0m0.137s 00:10:44.262 19:11:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:44.262 19:11:21 -- common/autotest_common.sh@10 -- # set +x 00:10:44.262 19:11:21 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:44.262 19:11:21 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:44.262 19:11:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:44.262 19:11:21 -- common/autotest_common.sh@10 -- # set +x 00:10:44.262 ************************************ 00:10:44.262 START TEST nvme_doorbell_aers 00:10:44.262 ************************************ 00:10:44.262 19:11:21 -- common/autotest_common.sh@1102 -- # nvme_doorbell_aers 00:10:44.262 19:11:21 -- nvme/nvme.sh@70 -- # bdfs=() 00:10:44.262 19:11:21 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:44.262 19:11:21 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:44.262 19:11:21 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:44.262 19:11:21 -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:44.262 19:11:21 -- common/autotest_common.sh@1496 -- # local bdfs 00:10:44.262 19:11:21 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:44.262 19:11:21 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:44.262 19:11:21 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:44.262 19:11:21 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:44.262 19:11:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:44.262 19:11:21 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:44.262 19:11:21 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:44.262 [2024-02-14 19:11:21.549513] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:10:54.282 Executing: test_write_invalid_db 00:10:54.282 Waiting for AER completion... 00:10:54.282 Failure: test_write_invalid_db 00:10:54.282 00:10:54.282 Executing: test_invalid_db_write_overflow_sq 00:10:54.282 Waiting for AER completion... 00:10:54.282 Failure: test_invalid_db_write_overflow_sq 00:10:54.282 00:10:54.282 Executing: test_invalid_db_write_overflow_cq 00:10:54.282 Waiting for AER completion... 00:10:54.282 Failure: test_invalid_db_write_overflow_cq 00:10:54.282 00:10:54.282 19:11:31 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:54.282 19:11:31 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:54.282 [2024-02-14 19:11:31.597670] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:04.256 Executing: test_write_invalid_db 00:11:04.256 Waiting for AER completion... 00:11:04.256 Failure: test_write_invalid_db 00:11:04.256 00:11:04.256 Executing: test_invalid_db_write_overflow_sq 00:11:04.256 Waiting for AER completion... 00:11:04.256 Failure: test_invalid_db_write_overflow_sq 00:11:04.256 00:11:04.256 Executing: test_invalid_db_write_overflow_cq 00:11:04.256 Waiting for AER completion... 00:11:04.256 Failure: test_invalid_db_write_overflow_cq 00:11:04.256 00:11:04.256 19:11:41 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:04.256 19:11:41 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:04.256 [2024-02-14 19:11:41.644609] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:14.229 Executing: test_write_invalid_db 00:11:14.229 Waiting for AER completion... 00:11:14.229 Failure: test_write_invalid_db 00:11:14.229 00:11:14.229 Executing: test_invalid_db_write_overflow_sq 00:11:14.229 Waiting for AER completion... 00:11:14.229 Failure: test_invalid_db_write_overflow_sq 00:11:14.229 00:11:14.229 Executing: test_invalid_db_write_overflow_cq 00:11:14.229 Waiting for AER completion... 00:11:14.229 Failure: test_invalid_db_write_overflow_cq 00:11:14.229 00:11:14.229 19:11:51 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:14.229 19:11:51 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:14.491 [2024-02-14 19:11:51.696925] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 Executing: test_write_invalid_db 00:11:24.475 Waiting for AER completion... 00:11:24.475 Failure: test_write_invalid_db 00:11:24.475 00:11:24.475 Executing: test_invalid_db_write_overflow_sq 00:11:24.475 Waiting for AER completion... 00:11:24.475 Failure: test_invalid_db_write_overflow_sq 00:11:24.475 00:11:24.475 Executing: test_invalid_db_write_overflow_cq 00:11:24.475 Waiting for AER completion... 
00:11:24.475 Failure: test_invalid_db_write_overflow_cq 00:11:24.475 00:11:24.475 00:11:24.475 real 0m40.241s 00:11:24.475 user 0m33.601s 00:11:24.475 sys 0m6.255s 00:11:24.475 ************************************ 00:11:24.475 END TEST nvme_doorbell_aers 00:11:24.475 ************************************ 00:11:24.475 19:12:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:24.475 19:12:01 -- common/autotest_common.sh@10 -- # set +x 00:11:24.475 19:12:01 -- nvme/nvme.sh@97 -- # uname 00:11:24.475 19:12:01 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:24.475 19:12:01 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:24.475 19:12:01 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:11:24.475 19:12:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:24.475 19:12:01 -- common/autotest_common.sh@10 -- # set +x 00:11:24.475 ************************************ 00:11:24.475 START TEST nvme_multi_aen 00:11:24.475 ************************************ 00:11:24.475 19:12:01 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:24.475 [2024-02-14 19:12:01.764938] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.765238] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.765277] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.766918] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.766965] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.766990] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.768277] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.768322] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.768348] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.769905] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.770152] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 00:11:24.475 [2024-02-14 19:12:01.770417] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65130) is not found. Dropping the request. 
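For readability, the per-controller loop traced in the nvme_doorbell_aers output above (get_nvme_bdfs feeding one timeout-wrapped doorbell_aers run per PCIe address) condenses to the bash below. This is reconstructed from the xtrace lines in this log, not copied from test/nvme/nvme.sh.

rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done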
00:11:24.475 Child process pid: 65648 00:11:24.734 [Child] Asynchronous Event Request test 00:11:24.734 [Child] Attached to 0000:00:06.0 00:11:24.734 [Child] Attached to 0000:00:07.0 00:11:24.734 [Child] Attached to 0000:00:09.0 00:11:24.734 [Child] Attached to 0000:00:08.0 00:11:24.734 [Child] Registering asynchronous event callbacks... 00:11:24.734 [Child] Getting orig temperature thresholds of all controllers 00:11:24.734 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:24.734 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:24.734 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:24.734 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:24.734 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:24.734 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:24.734 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:24.734 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:24.734 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:24.734 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:24.734 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:24.734 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:24.734 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:24.734 [Child] Cleaning up... 00:11:24.734 Asynchronous Event Request test 00:11:24.734 Attached to 0000:00:06.0 00:11:24.734 Attached to 0000:00:07.0 00:11:24.734 Attached to 0000:00:09.0 00:11:24.734 Attached to 0000:00:08.0 00:11:24.734 Reset controller to setup AER completions for this process 00:11:24.734 Registering asynchronous event callbacks... 
00:11:24.734 Getting orig temperature thresholds of all controllers 00:11:24.734 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:24.734 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:24.734 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:24.734 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:24.735 Setting all controllers temperature threshold low to trigger AER 00:11:24.735 Waiting for all controllers temperature threshold to be set lower 00:11:24.735 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:24.735 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:11:24.735 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:24.735 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:11:24.735 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:24.735 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:11:24.735 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:24.735 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:11:24.735 Waiting for all controllers to trigger AER and reset threshold 00:11:24.735 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:24.735 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:24.735 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:24.735 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:24.735 Cleaning up... 00:11:24.735 00:11:24.735 real 0m0.573s 00:11:24.735 user 0m0.232s 00:11:24.735 sys 0m0.245s 00:11:24.735 19:12:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:24.735 19:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:24.735 ************************************ 00:11:24.735 END TEST nvme_multi_aen 00:11:24.735 ************************************ 00:11:24.735 19:12:02 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:24.735 19:12:02 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:11:24.735 19:12:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:24.735 19:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:24.735 ************************************ 00:11:24.735 START TEST nvme_startup 00:11:24.735 ************************************ 00:11:24.735 19:12:02 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:24.993 Initializing NVMe Controllers 00:11:24.993 Attached to 0000:00:06.0 00:11:24.993 Attached to 0000:00:07.0 00:11:24.993 Attached to 0000:00:09.0 00:11:24.993 Attached to 0000:00:08.0 00:11:24.993 Initialization complete. 00:11:24.993 Time used:203140.344 (us). 
00:11:25.253 ************************************ 00:11:25.253 END TEST nvme_startup 00:11:25.253 ************************************ 00:11:25.253 00:11:25.253 real 0m0.294s 00:11:25.253 user 0m0.113s 00:11:25.253 sys 0m0.135s 00:11:25.253 19:12:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:25.253 19:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:25.253 19:12:02 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:25.253 19:12:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:25.253 19:12:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:25.253 19:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:25.253 ************************************ 00:11:25.253 START TEST nvme_multi_secondary 00:11:25.253 ************************************ 00:11:25.253 19:12:02 -- common/autotest_common.sh@1102 -- # nvme_multi_secondary 00:11:25.253 19:12:02 -- nvme/nvme.sh@52 -- # pid0=65704 00:11:25.253 19:12:02 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:25.253 19:12:02 -- nvme/nvme.sh@54 -- # pid1=65705 00:11:25.253 19:12:02 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:25.253 19:12:02 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:29.461 Initializing NVMe Controllers 00:11:29.461 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:29.461 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:29.461 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:29.461 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:29.461 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:11:29.461 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:11:29.461 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:11:29.461 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:11:29.461 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:11:29.461 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:11:29.461 Initialization complete. Launching workers. 
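The nvme_multi_secondary stage starting here runs three spdk_nvme_perf instances against the same controllers, sharing process state through the common -i 0 shared-memory id while pinning each instance to a different core mask. The sketch below reconstructs that pattern from the three invocations logged above; the backgrounding and wait order is an assumption inferred from the trace, the real sequencing lives in test/nvme/nvme.sh.

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # long (5 s) run on core 0
pid0=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # 3 s run on core 1
pid1=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # 3 s run on core 2, foreground
wait "$pid0"
wait "$pid1"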
00:11:29.461 ======================================================== 00:11:29.461 Latency(us) 00:11:29.461 Device Information : IOPS MiB/s Average min max 00:11:29.461 PCIE (0000:00:06.0) NSID 1 from core 1: 5451.82 21.30 2940.39 1457.38 11157.86 00:11:29.461 PCIE (0000:00:07.0) NSID 1 from core 1: 5451.82 21.30 2942.06 1334.24 10710.66 00:11:29.461 PCIE (0000:00:09.0) NSID 1 from core 1: 5451.82 21.30 2941.96 1397.20 10211.74 00:11:29.461 PCIE (0000:00:08.0) NSID 1 from core 1: 5451.82 21.30 2941.85 1365.95 10092.64 00:11:29.461 PCIE (0000:00:08.0) NSID 2 from core 1: 5451.82 21.30 2941.91 1429.95 10191.88 00:11:29.461 PCIE (0000:00:08.0) NSID 3 from core 1: 5451.82 21.30 2941.79 1344.58 9832.05 00:11:29.461 ======================================================== 00:11:29.461 Total : 32710.92 127.78 2941.66 1334.24 11157.86 00:11:29.461 00:11:29.461 Initializing NVMe Controllers 00:11:29.461 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:29.461 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:29.461 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:29.461 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:29.461 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:11:29.461 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:11:29.461 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:11:29.461 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:11:29.461 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:11:29.461 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:11:29.461 Initialization complete. Launching workers. 00:11:29.461 ======================================================== 00:11:29.461 Latency(us) 00:11:29.461 Device Information : IOPS MiB/s Average min max 00:11:29.461 PCIE (0000:00:06.0) NSID 1 from core 2: 2462.53 9.62 6494.02 1703.72 13771.81 00:11:29.461 PCIE (0000:00:07.0) NSID 1 from core 2: 2462.53 9.62 6497.23 2057.67 14026.91 00:11:29.461 PCIE (0000:00:09.0) NSID 1 from core 2: 2462.53 9.62 6497.70 2031.95 13685.73 00:11:29.461 PCIE (0000:00:08.0) NSID 1 from core 2: 2462.53 9.62 6497.77 1878.01 13547.66 00:11:29.461 PCIE (0000:00:08.0) NSID 2 from core 2: 2462.53 9.62 6497.74 1997.92 12928.77 00:11:29.461 PCIE (0000:00:08.0) NSID 3 from core 2: 2462.53 9.62 6497.65 1972.50 12813.99 00:11:29.461 ======================================================== 00:11:29.461 Total : 14775.18 57.72 6497.02 1703.72 14026.91 00:11:29.461 00:11:29.461 19:12:06 -- nvme/nvme.sh@56 -- # wait 65704 00:11:30.395 Initializing NVMe Controllers 00:11:30.395 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:30.395 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:30.395 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:30.395 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:30.395 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:30.395 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:30.395 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:30.395 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:30.395 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:30.395 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:30.395 Initialization complete. Launching workers. 
00:11:30.395 ======================================================== 00:11:30.395 Latency(us) 00:11:30.395 Device Information : IOPS MiB/s Average min max 00:11:30.395 PCIE (0000:00:06.0) NSID 1 from core 0: 8146.39 31.82 1962.54 961.96 5752.51 00:11:30.395 PCIE (0000:00:07.0) NSID 1 from core 0: 8146.39 31.82 1963.57 972.29 5762.00 00:11:30.395 PCIE (0000:00:09.0) NSID 1 from core 0: 8146.39 31.82 1963.51 896.73 6190.13 00:11:30.395 PCIE (0000:00:08.0) NSID 1 from core 0: 8146.39 31.82 1963.47 831.71 6606.07 00:11:30.395 PCIE (0000:00:08.0) NSID 2 from core 0: 8146.39 31.82 1963.42 806.65 6153.32 00:11:30.395 PCIE (0000:00:08.0) NSID 3 from core 0: 8146.39 31.82 1963.36 708.46 5762.46 00:11:30.395 ======================================================== 00:11:30.395 Total : 48878.36 190.93 1963.31 708.46 6606.07 00:11:30.395 00:11:30.653 19:12:07 -- nvme/nvme.sh@57 -- # wait 65705 00:11:30.653 19:12:07 -- nvme/nvme.sh@61 -- # pid0=65779 00:11:30.654 19:12:07 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:30.654 19:12:07 -- nvme/nvme.sh@63 -- # pid1=65780 00:11:30.654 19:12:07 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:30.654 19:12:07 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:33.939 Initializing NVMe Controllers 00:11:33.939 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:33.939 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:33.939 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:33.939 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:33.939 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:11:33.939 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:11:33.939 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:11:33.939 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:11:33.939 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:11:33.939 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:11:33.939 Initialization complete. Launching workers. 
00:11:33.939 ======================================================== 00:11:33.939 Latency(us) 00:11:33.939 Device Information : IOPS MiB/s Average min max 00:11:33.939 PCIE (0000:00:06.0) NSID 1 from core 1: 5770.65 22.54 2770.93 995.87 6579.10 00:11:33.939 PCIE (0000:00:07.0) NSID 1 from core 1: 5770.65 22.54 2772.27 1029.83 6391.98 00:11:33.939 PCIE (0000:00:09.0) NSID 1 from core 1: 5770.65 22.54 2772.21 1037.50 6390.56 00:11:33.939 PCIE (0000:00:08.0) NSID 1 from core 1: 5770.65 22.54 2772.46 1014.54 6197.27 00:11:33.939 PCIE (0000:00:08.0) NSID 2 from core 1: 5775.99 22.56 2770.02 1035.63 6080.36 00:11:33.939 PCIE (0000:00:08.0) NSID 3 from core 1: 5775.99 22.56 2769.97 1033.87 6446.65 00:11:33.939 ======================================================== 00:11:33.939 Total : 34634.59 135.29 2771.31 995.87 6579.10 00:11:33.939 00:11:34.198 Initializing NVMe Controllers 00:11:34.198 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:34.198 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:34.198 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:34.198 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:34.198 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:34.198 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:34.198 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:34.198 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:34.198 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:34.198 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:34.198 Initialization complete. Launching workers. 00:11:34.198 ======================================================== 00:11:34.198 Latency(us) 00:11:34.198 Device Information : IOPS MiB/s Average min max 00:11:34.198 PCIE (0000:00:06.0) NSID 1 from core 0: 5554.64 21.70 2878.62 991.79 6152.08 00:11:34.198 PCIE (0000:00:07.0) NSID 1 from core 0: 5554.64 21.70 2879.77 1041.65 5960.41 00:11:34.198 PCIE (0000:00:09.0) NSID 1 from core 0: 5554.64 21.70 2879.63 986.87 6435.70 00:11:34.198 PCIE (0000:00:08.0) NSID 1 from core 0: 5554.64 21.70 2879.49 1003.02 5879.28 00:11:34.198 PCIE (0000:00:08.0) NSID 2 from core 0: 5554.64 21.70 2879.36 905.27 5654.14 00:11:34.198 PCIE (0000:00:08.0) NSID 3 from core 0: 5554.64 21.70 2879.25 863.17 6498.62 00:11:34.198 ======================================================== 00:11:34.198 Total : 33327.85 130.19 2879.35 863.17 6498.62 00:11:34.198 00:11:36.101 Initializing NVMe Controllers 00:11:36.101 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:36.101 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:36.101 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:36.101 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:36.101 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:11:36.101 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:11:36.101 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:11:36.101 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:11:36.101 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:11:36.101 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:11:36.101 Initialization complete. Launching workers. 
00:11:36.101 ======================================================== 00:11:36.101 Latency(us) 00:11:36.101 Device Information : IOPS MiB/s Average min max 00:11:36.101 PCIE (0000:00:06.0) NSID 1 from core 2: 3695.81 14.44 4327.33 1039.56 14276.89 00:11:36.101 PCIE (0000:00:07.0) NSID 1 from core 2: 3695.81 14.44 4328.29 1012.26 14626.05 00:11:36.101 PCIE (0000:00:09.0) NSID 1 from core 2: 3695.81 14.44 4328.48 1033.77 14371.22 00:11:36.101 PCIE (0000:00:08.0) NSID 1 from core 2: 3695.81 14.44 4328.14 950.21 14854.03 00:11:36.101 PCIE (0000:00:08.0) NSID 2 from core 2: 3695.81 14.44 4327.65 875.30 14376.74 00:11:36.101 PCIE (0000:00:08.0) NSID 3 from core 2: 3695.81 14.44 4324.73 791.35 13956.13 00:11:36.101 ======================================================== 00:11:36.101 Total : 22174.86 86.62 4327.44 791.35 14854.03 00:11:36.101 00:11:36.101 ************************************ 00:11:36.101 END TEST nvme_multi_secondary 00:11:36.101 ************************************ 00:11:36.101 19:12:13 -- nvme/nvme.sh@65 -- # wait 65779 00:11:36.101 19:12:13 -- nvme/nvme.sh@66 -- # wait 65780 00:11:36.101 00:11:36.101 real 0m10.988s 00:11:36.101 user 0m19.107s 00:11:36.101 sys 0m0.877s 00:11:36.101 19:12:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:36.101 19:12:13 -- common/autotest_common.sh@10 -- # set +x 00:11:36.101 19:12:13 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:36.101 19:12:13 -- nvme/nvme.sh@102 -- # kill_stub 00:11:36.101 19:12:13 -- common/autotest_common.sh@1063 -- # [[ -e /proc/64702 ]] 00:11:36.101 19:12:13 -- common/autotest_common.sh@1064 -- # kill 64702 00:11:36.101 19:12:13 -- common/autotest_common.sh@1065 -- # wait 64702 00:11:37.039 [2024-02-14 19:12:14.116388] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:37.039 [2024-02-14 19:12:14.116504] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:37.039 [2024-02-14 19:12:14.116530] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:37.039 [2024-02-14 19:12:14.116592] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:37.297 [2024-02-14 19:12:14.630755] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:37.297 [2024-02-14 19:12:14.630857] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:37.297 [2024-02-14 19:12:14.630885] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:37.297 [2024-02-14 19:12:14.630906] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:37.864 [2024-02-14 19:12:15.149655] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 
00:11:37.864 [2024-02-14 19:12:15.149745] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:37.865 [2024-02-14 19:12:15.149771] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:37.865 [2024-02-14 19:12:15.149793] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:38.801 [2024-02-14 19:12:16.174108] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:38.801 [2024-02-14 19:12:16.174204] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:38.801 [2024-02-14 19:12:16.174229] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:38.801 [2024-02-14 19:12:16.174253] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65647) is not found. Dropping the request. 00:11:39.060 19:12:16 -- common/autotest_common.sh@1067 -- # rm -f /var/run/spdk_stub0 00:11:39.060 19:12:16 -- common/autotest_common.sh@1071 -- # echo 2 00:11:39.060 19:12:16 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:39.060 19:12:16 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:39.060 19:12:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:39.060 19:12:16 -- common/autotest_common.sh@10 -- # set +x 00:11:39.060 ************************************ 00:11:39.060 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:39.060 ************************************ 00:11:39.060 19:12:16 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:39.319 * Looking for test storage... 
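Before the stuck-admin-command test can attach a controller, get_first_nvme_bdf (stepped through in the trace that follows) enumerates the local NVMe PCI addresses by asking gen_nvme.sh for its generated attach-controller config and pulling every traddr out with jq. A minimal standalone sketch of that lookup, assuming the repo path used in this run:

    # Sketch of the bdf discovery pattern from get_first_nvme_bdf (see the trace below).
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "first bdf: ${bdfs[0]}"

In this run the lookup resolves to 0000:00:06.0, the controller the target then attaches as nvme0.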
00:11:39.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:39.319 19:12:16 -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:39.319 19:12:16 -- common/autotest_common.sh@1507 -- # local bdfs 00:11:39.319 19:12:16 -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:39.319 19:12:16 -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:39.319 19:12:16 -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:39.319 19:12:16 -- common/autotest_common.sh@1496 -- # local bdfs 00:11:39.319 19:12:16 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:39.319 19:12:16 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:39.319 19:12:16 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:39.319 19:12:16 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:39.319 19:12:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:39.319 19:12:16 -- common/autotest_common.sh@1510 -- # echo 0000:00:06.0 00:11:39.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65958 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:39.319 19:12:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65958 00:11:39.319 19:12:16 -- common/autotest_common.sh@817 -- # '[' -z 65958 ']' 00:11:39.319 19:12:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.319 19:12:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:39.319 19:12:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.319 19:12:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:39.319 19:12:16 -- common/autotest_common.sh@10 -- # set +x 00:11:39.319 [2024-02-14 19:12:16.730011] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
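The trace that follows drives the actual reset-with-stuck-admin-command scenario entirely over the RPC socket: attach the controller, arm a one-shot admin error injection that holds the next matching command instead of submitting it, fire a Get Features (number of queues) admin command so it gets stuck, then reset the controller and confirm the held command is completed manually. A sketch of that RPC sequence, using the same commands and arguments as this run; CMD_B64 is a placeholder for the base64-encoded admin command payload shown in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
    # Hold the next admin command with opcode 10 (Get Features here, per the completion
    # trace) for up to 15 s, then complete it with SCT=0 / SC=1 without submitting it.
    "$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Send the Get Features admin command; the injection keeps it pending.
    "$RPC" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD_B64" & get_feat_pid=$!
    sleep 2
    # Resetting the controller must complete the pending admin command manually.
    "$RPC" bdev_nvme_reset_controller nvme0
    wait "$get_feat_pid"
    "$RPC" bdev_nvme_detach_controller nvme0

The test then decodes the completion returned by bdev_nvme_send_cmd and checks that SC/SCT match the injected values and that the whole sequence fits inside the five-second test_timeout set above.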
00:11:39.319 [2024-02-14 19:12:16.730364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65958 ] 00:11:39.578 [2024-02-14 19:12:16.919283] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.837 [2024-02-14 19:12:17.134565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:39.837 [2024-02-14 19:12:17.135269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.837 [2024-02-14 19:12:17.135375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.837 [2024-02-14 19:12:17.135554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.837 [2024-02-14 19:12:17.135570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.266 19:12:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:41.266 19:12:18 -- common/autotest_common.sh@850 -- # return 0 00:11:41.266 19:12:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:11:41.266 19:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:41.266 19:12:18 -- common/autotest_common.sh@10 -- # set +x 00:11:41.266 nvme0n1 00:11:41.266 19:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:41.266 19:12:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:41.266 19:12:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_JPjt7.txt 00:11:41.266 19:12:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:41.266 19:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:41.266 19:12:18 -- common/autotest_common.sh@10 -- # set +x 00:11:41.266 true 00:11:41.266 19:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:41.266 19:12:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:41.266 19:12:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1707937938 00:11:41.266 19:12:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65994 00:11:41.266 19:12:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:41.266 19:12:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:41.266 19:12:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:43.170 19:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.170 19:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:43.170 [2024-02-14 19:12:20.484570] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:43.170 [2024-02-14 19:12:20.484926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:43.170 [2024-02-14 19:12:20.484963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:43.170 [2024-02-14 19:12:20.484982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.170 [2024-02-14 19:12:20.486834] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:43.170 19:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65994 00:11:43.170 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65994 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65994 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:43.170 19:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.170 19:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:43.170 19:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_JPjt7.txt 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:43.170 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:43.429 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_JPjt7.txt 00:11:43.430 19:12:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65958 00:11:43.430 19:12:20 -- common/autotest_common.sh@924 -- # '[' -z 65958 ']' 00:11:43.430 19:12:20 -- common/autotest_common.sh@928 -- # kill -0 65958 00:11:43.430 19:12:20 -- common/autotest_common.sh@929 -- # uname 00:11:43.430 19:12:20 -- 
common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:43.430 19:12:20 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 65958 00:11:43.430 killing process with pid 65958 00:11:43.430 19:12:20 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:43.430 19:12:20 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:43.430 19:12:20 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 65958' 00:11:43.430 19:12:20 -- common/autotest_common.sh@943 -- # kill 65958 00:11:43.430 19:12:20 -- common/autotest_common.sh@948 -- # wait 65958 00:11:45.332 19:12:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:45.332 19:12:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:45.332 00:11:45.332 real 0m6.264s 00:11:45.332 user 0m22.106s 00:11:45.332 sys 0m0.619s 00:11:45.332 ************************************ 00:11:45.332 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:45.332 ************************************ 00:11:45.332 19:12:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:45.332 19:12:22 -- common/autotest_common.sh@10 -- # set +x 00:11:45.592 19:12:22 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:45.592 19:12:22 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:45.592 19:12:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:45.592 19:12:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:45.592 19:12:22 -- common/autotest_common.sh@10 -- # set +x 00:11:45.592 ************************************ 00:11:45.592 START TEST nvme_fio 00:11:45.592 ************************************ 00:11:45.592 19:12:22 -- common/autotest_common.sh@1102 -- # nvme_fio_test 00:11:45.592 19:12:22 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:45.592 19:12:22 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:45.592 19:12:22 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:45.592 19:12:22 -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:45.592 19:12:22 -- common/autotest_common.sh@1496 -- # local bdfs 00:11:45.592 19:12:22 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:45.592 19:12:22 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:45.592 19:12:22 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:45.592 19:12:22 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:45.592 19:12:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:45.592 19:12:22 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:11:45.592 19:12:22 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:45.592 19:12:22 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:45.592 19:12:22 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:45.592 19:12:22 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:45.851 19:12:23 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:45.851 19:12:23 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:46.108 19:12:23 -- nvme/nvme.sh@41 -- # bs=4096 00:11:46.109 19:12:23 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:46.109 19:12:23 -- common/autotest_common.sh@1337 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:46.109 19:12:23 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:11:46.109 19:12:23 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:46.109 19:12:23 -- common/autotest_common.sh@1316 -- # local sanitizers 00:11:46.109 19:12:23 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:46.109 19:12:23 -- common/autotest_common.sh@1318 -- # shift 00:11:46.109 19:12:23 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:11:46.109 19:12:23 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:11:46.109 19:12:23 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:46.109 19:12:23 -- common/autotest_common.sh@1322 -- # grep libasan 00:11:46.109 19:12:23 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:11:46.109 19:12:23 -- common/autotest_common.sh@1322 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:46.109 19:12:23 -- common/autotest_common.sh@1323 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:46.109 19:12:23 -- common/autotest_common.sh@1324 -- # break 00:11:46.109 19:12:23 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:46.109 19:12:23 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:46.367 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:46.367 fio-3.35 00:11:46.367 Starting 1 thread 00:11:48.895 00:11:48.896 test: (groupid=0, jobs=1): err= 0: pid=66140: Wed Feb 14 19:12:26 2024 00:11:48.896 read: IOPS=16.0k, BW=62.3MiB/s (65.4MB/s)(125MiB/2001msec) 00:11:48.896 slat (nsec): min=4348, max=69164, avg=6043.56, stdev=2002.32 00:11:48.896 clat (usec): min=479, max=10777, avg=3986.36, stdev=608.50 00:11:48.896 lat (usec): min=484, max=10812, avg=3992.41, stdev=609.30 00:11:48.896 clat percentiles (usec): 00:11:48.896 | 1.00th=[ 3163], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3687], 00:11:48.896 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:11:48.896 | 70.00th=[ 3949], 80.00th=[ 4113], 90.00th=[ 4555], 95.00th=[ 4686], 00:11:48.896 | 99.00th=[ 7242], 99.50th=[ 7767], 99.90th=[ 8586], 99.95th=[ 9241], 00:11:48.896 | 99.99th=[10552] 00:11:48.896 bw ( KiB/s): min=53184, max=68240, per=98.97%, avg=63176.00, stdev=8653.59, samples=3 00:11:48.896 iops : min=13296, max=17060, avg=15794.00, stdev=2163.40, samples=3 00:11:48.896 write: IOPS=16.0k, BW=62.5MiB/s (65.5MB/s)(125MiB/2001msec); 0 zone resets 00:11:48.896 slat (nsec): min=4361, max=56725, avg=6339.26, stdev=2038.93 00:11:48.896 clat (usec): min=231, max=10597, avg=3996.29, stdev=617.28 00:11:48.896 lat (usec): min=240, max=10610, avg=4002.63, stdev=618.15 00:11:48.896 clat percentiles (usec): 00:11:48.896 | 1.00th=[ 3097], 5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 3720], 00:11:48.896 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3884], 00:11:48.896 | 70.00th=[ 3949], 80.00th=[ 4113], 90.00th=[ 4555], 95.00th=[ 4752], 00:11:48.896 | 99.00th=[ 7242], 99.50th=[ 
7767], 99.90th=[ 8586], 99.95th=[ 9110], 00:11:48.896 | 99.99th=[10290] 00:11:48.896 bw ( KiB/s): min=53496, max=67736, per=98.28%, avg=62874.67, stdev=8123.98, samples=3 00:11:48.896 iops : min=13374, max=16934, avg=15718.67, stdev=2031.00, samples=3 00:11:48.896 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:48.896 lat (msec) : 2=0.06%, 4=74.52%, 10=25.36%, 20=0.02% 00:11:48.896 cpu : usr=98.85%, sys=0.15%, ctx=21, majf=0, minf=605 00:11:48.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:48.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.896 issued rwts: total=31934,32003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.896 00:11:48.896 Run status group 0 (all jobs): 00:11:48.896 READ: bw=62.3MiB/s (65.4MB/s), 62.3MiB/s-62.3MiB/s (65.4MB/s-65.4MB/s), io=125MiB (131MB), run=2001-2001msec 00:11:48.896 WRITE: bw=62.5MiB/s (65.5MB/s), 62.5MiB/s-62.5MiB/s (65.5MB/s-65.5MB/s), io=125MiB (131MB), run=2001-2001msec 00:11:49.155 ----------------------------------------------------- 00:11:49.155 Suppressions used: 00:11:49.155 count bytes template 00:11:49.155 1 32 /usr/src/fio/parse.c 00:11:49.155 1 8 libtcmalloc_minimal.so 00:11:49.155 ----------------------------------------------------- 00:11:49.155 00:11:49.155 19:12:26 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:49.155 19:12:26 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:49.155 19:12:26 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:49.155 19:12:26 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:49.413 19:12:26 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:49.413 19:12:26 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:49.672 19:12:26 -- nvme/nvme.sh@41 -- # bs=4096 00:11:49.672 19:12:26 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:49.672 19:12:26 -- common/autotest_common.sh@1337 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:49.672 19:12:26 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:11:49.672 19:12:26 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:49.672 19:12:26 -- common/autotest_common.sh@1316 -- # local sanitizers 00:11:49.672 19:12:26 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:49.672 19:12:26 -- common/autotest_common.sh@1318 -- # shift 00:11:49.672 19:12:26 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:11:49.672 19:12:26 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:11:49.672 19:12:26 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:49.672 19:12:26 -- common/autotest_common.sh@1322 -- # grep libasan 00:11:49.672 19:12:26 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:11:49.672 19:12:26 -- common/autotest_common.sh@1322 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:49.672 19:12:26 -- common/autotest_common.sh@1323 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:11:49.672 19:12:26 -- common/autotest_common.sh@1324 -- # break 00:11:49.672 19:12:26 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:49.672 19:12:26 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:49.931 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:49.931 fio-3.35 00:11:49.931 Starting 1 thread 00:11:53.226 00:11:53.226 test: (groupid=0, jobs=1): err= 0: pid=66201: Wed Feb 14 19:12:30 2024 00:11:53.226 read: IOPS=17.1k, BW=66.9MiB/s (70.2MB/s)(134MiB/2001msec) 00:11:53.226 slat (nsec): min=4175, max=55983, avg=5721.78, stdev=1789.42 00:11:53.226 clat (usec): min=313, max=8724, avg=3709.13, stdev=521.60 00:11:53.226 lat (usec): min=319, max=8773, avg=3714.86, stdev=522.34 00:11:53.226 clat percentiles (usec): 00:11:53.226 | 1.00th=[ 2999], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3392], 00:11:53.226 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3621], 00:11:53.226 | 70.00th=[ 3949], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4359], 00:11:53.226 | 99.00th=[ 5932], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 7439], 00:11:53.226 | 99.99th=[ 8586] 00:11:53.226 bw ( KiB/s): min=64248, max=75232, per=100.00%, avg=70557.33, stdev=5671.52, samples=3 00:11:53.226 iops : min=16062, max=18808, avg=17639.33, stdev=1417.88, samples=3 00:11:53.226 write: IOPS=17.2k, BW=67.0MiB/s (70.3MB/s)(134MiB/2001msec); 0 zone resets 00:11:53.226 slat (nsec): min=4260, max=53060, avg=5872.11, stdev=1816.55 00:11:53.226 clat (usec): min=288, max=8618, avg=3727.04, stdev=527.85 00:11:53.226 lat (usec): min=294, max=8666, avg=3732.91, stdev=528.59 00:11:53.226 clat percentiles (usec): 00:11:53.226 | 1.00th=[ 3032], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3392], 00:11:53.226 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3621], 00:11:53.226 | 70.00th=[ 3982], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4424], 00:11:53.226 | 99.00th=[ 6063], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 7504], 00:11:53.226 | 99.99th=[ 8291] 00:11:53.226 bw ( KiB/s): min=64640, max=74712, per=100.00%, avg=70490.67, stdev=5229.95, samples=3 00:11:53.226 iops : min=16160, max=18678, avg=17622.67, stdev=1307.49, samples=3 00:11:53.226 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:53.226 lat (msec) : 2=0.05%, 4=72.32%, 10=27.60% 00:11:53.226 cpu : usr=99.05%, sys=0.05%, ctx=3, majf=0, minf=606 00:11:53.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:53.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.226 issued rwts: total=34291,34332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.226 00:11:53.226 Run status group 0 (all jobs): 00:11:53.226 READ: bw=66.9MiB/s (70.2MB/s), 66.9MiB/s-66.9MiB/s (70.2MB/s-70.2MB/s), io=134MiB (140MB), run=2001-2001msec 00:11:53.226 WRITE: bw=67.0MiB/s (70.3MB/s), 67.0MiB/s-67.0MiB/s (70.3MB/s-70.3MB/s), io=134MiB (141MB), run=2001-2001msec 00:11:53.494 ----------------------------------------------------- 00:11:53.494 Suppressions used: 00:11:53.494 count bytes template 00:11:53.494 1 32 /usr/src/fio/parse.c 00:11:53.494 1 8 
libtcmalloc_minimal.so 00:11:53.494 ----------------------------------------------------- 00:11:53.494 00:11:53.494 19:12:30 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:53.494 19:12:30 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:53.494 19:12:30 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:53.494 19:12:30 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:53.753 19:12:31 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:53.753 19:12:31 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:54.011 19:12:31 -- nvme/nvme.sh@41 -- # bs=4096 00:11:54.011 19:12:31 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:54.011 19:12:31 -- common/autotest_common.sh@1337 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:54.011 19:12:31 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:11:54.011 19:12:31 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:54.011 19:12:31 -- common/autotest_common.sh@1316 -- # local sanitizers 00:11:54.011 19:12:31 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:54.011 19:12:31 -- common/autotest_common.sh@1318 -- # shift 00:11:54.011 19:12:31 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:11:54.012 19:12:31 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:11:54.012 19:12:31 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:54.012 19:12:31 -- common/autotest_common.sh@1322 -- # grep libasan 00:11:54.012 19:12:31 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:11:54.012 19:12:31 -- common/autotest_common.sh@1322 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:54.012 19:12:31 -- common/autotest_common.sh@1323 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:54.012 19:12:31 -- common/autotest_common.sh@1324 -- # break 00:11:54.012 19:12:31 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:54.012 19:12:31 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:54.270 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:54.270 fio-3.35 00:11:54.270 Starting 1 thread 00:11:57.556 00:11:57.556 test: (groupid=0, jobs=1): err= 0: pid=66267: Wed Feb 14 19:12:34 2024 00:11:57.556 read: IOPS=15.6k, BW=61.1MiB/s (64.0MB/s)(122MiB/2001msec) 00:11:57.556 slat (nsec): min=4457, max=84130, avg=6161.60, stdev=2115.26 00:11:57.556 clat (usec): min=246, max=8214, avg=4070.06, stdev=553.66 00:11:57.556 lat (usec): min=253, max=8220, avg=4076.22, stdev=554.41 00:11:57.556 clat percentiles (usec): 00:11:57.556 | 1.00th=[ 3326], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3687], 00:11:57.556 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 4015], 60.00th=[ 4146], 00:11:57.556 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 5145], 00:11:57.556 | 99.00th=[ 6521], 99.50th=[ 7177], 99.90th=[ 7570], 99.95th=[ 7701], 
00:11:57.556 | 99.99th=[ 7898] 00:11:57.556 bw ( KiB/s): min=58912, max=65104, per=97.96%, avg=61266.67, stdev=3351.71, samples=3 00:11:57.556 iops : min=14728, max=16276, avg=15316.67, stdev=837.93, samples=3 00:11:57.556 write: IOPS=15.6k, BW=61.1MiB/s (64.1MB/s)(122MiB/2001msec); 0 zone resets 00:11:57.556 slat (nsec): min=4637, max=50013, avg=6318.12, stdev=2113.17 00:11:57.556 clat (usec): min=278, max=8224, avg=4081.12, stdev=561.93 00:11:57.556 lat (usec): min=286, max=8230, avg=4087.44, stdev=562.72 00:11:57.556 clat percentiles (usec): 00:11:57.556 | 1.00th=[ 3326], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3687], 00:11:57.556 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 4015], 60.00th=[ 4146], 00:11:57.556 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 5145], 00:11:57.556 | 99.00th=[ 6587], 99.50th=[ 7177], 99.90th=[ 7570], 99.95th=[ 7701], 00:11:57.556 | 99.99th=[ 8094] 00:11:57.556 bw ( KiB/s): min=58912, max=64200, per=97.13%, avg=60802.67, stdev=2948.44, samples=3 00:11:57.556 iops : min=14728, max=16050, avg=15200.67, stdev=737.11, samples=3 00:11:57.556 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:57.556 lat (msec) : 2=0.06%, 4=48.70%, 10=51.20% 00:11:57.557 cpu : usr=98.90%, sys=0.15%, ctx=8, majf=0, minf=605 00:11:57.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:57.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:57.557 issued rwts: total=31288,31315,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:57.557 00:11:57.557 Run status group 0 (all jobs): 00:11:57.557 READ: bw=61.1MiB/s (64.0MB/s), 61.1MiB/s-61.1MiB/s (64.0MB/s-64.0MB/s), io=122MiB (128MB), run=2001-2001msec 00:11:57.557 WRITE: bw=61.1MiB/s (64.1MB/s), 61.1MiB/s-61.1MiB/s (64.1MB/s-64.1MB/s), io=122MiB (128MB), run=2001-2001msec 00:11:57.557 ----------------------------------------------------- 00:11:57.557 Suppressions used: 00:11:57.557 count bytes template 00:11:57.557 1 32 /usr/src/fio/parse.c 00:11:57.557 1 8 libtcmalloc_minimal.so 00:11:57.557 ----------------------------------------------------- 00:11:57.557 00:11:57.557 19:12:34 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:57.557 19:12:34 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:57.557 19:12:34 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:57.557 19:12:34 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:57.816 19:12:35 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:57.816 19:12:35 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:58.075 19:12:35 -- nvme/nvme.sh@41 -- # bs=4096 00:11:58.075 19:12:35 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:58.075 19:12:35 -- common/autotest_common.sh@1337 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:58.075 19:12:35 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:11:58.075 19:12:35 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:58.075 19:12:35 -- 
common/autotest_common.sh@1316 -- # local sanitizers 00:11:58.075 19:12:35 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:58.075 19:12:35 -- common/autotest_common.sh@1318 -- # shift 00:11:58.075 19:12:35 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:11:58.075 19:12:35 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:11:58.075 19:12:35 -- common/autotest_common.sh@1322 -- # grep libasan 00:11:58.075 19:12:35 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:58.075 19:12:35 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:11:58.334 19:12:35 -- common/autotest_common.sh@1322 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:58.334 19:12:35 -- common/autotest_common.sh@1323 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:58.334 19:12:35 -- common/autotest_common.sh@1324 -- # break 00:11:58.334 19:12:35 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:58.334 19:12:35 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:58.334 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:58.334 fio-3.35 00:11:58.334 Starting 1 thread 00:12:03.604 00:12:03.604 test: (groupid=0, jobs=1): err= 0: pid=66333: Wed Feb 14 19:12:40 2024 00:12:03.604 read: IOPS=17.1k, BW=67.0MiB/s (70.2MB/s)(134MiB/2001msec) 00:12:03.604 slat (usec): min=4, max=147, avg= 5.72, stdev= 1.87 00:12:03.604 clat (usec): min=287, max=9231, avg=3713.34, stdev=485.75 00:12:03.604 lat (usec): min=293, max=9271, avg=3719.05, stdev=486.28 00:12:03.604 clat percentiles (usec): 00:12:03.604 | 1.00th=[ 2671], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3392], 00:12:03.604 | 30.00th=[ 3458], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3687], 00:12:03.604 | 70.00th=[ 3884], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4490], 00:12:03.604 | 99.00th=[ 4883], 99.50th=[ 5473], 99.90th=[ 7635], 99.95th=[ 7963], 00:12:03.604 | 99.99th=[ 8979] 00:12:03.604 bw ( KiB/s): min=65504, max=71488, per=100.00%, avg=68634.67, stdev=3001.62, samples=3 00:12:03.604 iops : min=16376, max=17872, avg=17158.67, stdev=750.41, samples=3 00:12:03.604 write: IOPS=17.2k, BW=67.1MiB/s (70.3MB/s)(134MiB/2001msec); 0 zone resets 00:12:03.604 slat (nsec): min=4646, max=58006, avg=5853.17, stdev=1802.01 00:12:03.604 clat (usec): min=352, max=9039, avg=3717.21, stdev=486.31 00:12:03.604 lat (usec): min=359, max=9056, avg=3723.06, stdev=486.81 00:12:03.604 clat percentiles (usec): 00:12:03.604 | 1.00th=[ 2638], 5.00th=[ 3163], 10.00th=[ 3261], 20.00th=[ 3392], 00:12:03.604 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3687], 00:12:03.604 | 70.00th=[ 3851], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4490], 00:12:03.604 | 99.00th=[ 4948], 99.50th=[ 5604], 99.90th=[ 7701], 99.95th=[ 8029], 00:12:03.604 | 99.99th=[ 8979] 00:12:03.604 bw ( KiB/s): min=65032, max=71480, per=99.67%, avg=68469.33, stdev=3245.11, samples=3 00:12:03.604 iops : min=16258, max=17870, avg=17117.33, stdev=811.28, samples=3 00:12:03.604 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:03.604 lat (msec) : 2=0.07%, 4=73.89%, 10=26.01% 00:12:03.604 cpu : usr=98.35%, sys=0.40%, ctx=25, majf=0, minf=603 00:12:03.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 
00:12:03.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:03.604 issued rwts: total=34315,34365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:03.604 00:12:03.604 Run status group 0 (all jobs): 00:12:03.604 READ: bw=67.0MiB/s (70.2MB/s), 67.0MiB/s-67.0MiB/s (70.2MB/s-70.2MB/s), io=134MiB (141MB), run=2001-2001msec 00:12:03.604 WRITE: bw=67.1MiB/s (70.3MB/s), 67.1MiB/s-67.1MiB/s (70.3MB/s-70.3MB/s), io=134MiB (141MB), run=2001-2001msec 00:12:03.604 ----------------------------------------------------- 00:12:03.604 Suppressions used: 00:12:03.604 count bytes template 00:12:03.604 1 32 /usr/src/fio/parse.c 00:12:03.604 1 8 libtcmalloc_minimal.so 00:12:03.604 ----------------------------------------------------- 00:12:03.604 00:12:03.604 19:12:40 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:03.604 19:12:40 -- nvme/nvme.sh@46 -- # true 00:12:03.604 00:12:03.604 real 0m17.712s 00:12:03.604 user 0m13.618s 00:12:03.604 sys 0m3.967s 00:12:03.604 ************************************ 00:12:03.604 END TEST nvme_fio 00:12:03.604 ************************************ 00:12:03.604 19:12:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:03.604 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.604 ************************************ 00:12:03.604 END TEST nvme 00:12:03.604 ************************************ 00:12:03.604 00:12:03.604 real 1m34.929s 00:12:03.604 user 3m48.006s 00:12:03.604 sys 0m16.478s 00:12:03.604 19:12:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:03.604 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.604 19:12:40 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:12:03.604 19:12:40 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:03.604 19:12:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:03.604 19:12:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:03.604 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.604 ************************************ 00:12:03.604 START TEST nvme_scc 00:12:03.604 ************************************ 00:12:03.604 19:12:40 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:03.604 * Looking for test storage... 
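Each of the nvme_fio runs above is plain fio driven through the SPDK NVMe external I/O engine: the helper resolves the sanitizer runtime the plugin links against from ldd output, puts it first in LD_PRELOAD ahead of the spdk_nvme plugin (presumably because only the plugin is ASAN-instrumented and the runtime has to load first), and passes the PCIe address as the fio filename with dots in place of colons, since fio reserves ':' as a filename separator. The command shape, reconstructed from the trace above with this run's paths:

    # One fio run against the 0000:00:06.0 controller, as in the traces above.
    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    ASAN_LIB=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$ASAN_LIB $PLUGIN" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096

Resolving libasan from ldd rather than hard-coding a path is what the grep/awk pair in the trace is doing.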
00:12:03.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:03.604 19:12:40 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:03.604 19:12:40 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:03.604 19:12:40 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:03.604 19:12:40 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:03.604 19:12:40 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.604 19:12:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.604 19:12:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.604 19:12:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.604 19:12:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.604 19:12:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.604 19:12:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.604 19:12:40 -- paths/export.sh@5 -- # export PATH 00:12:03.604 19:12:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.605 19:12:40 -- nvme/functions.sh@10 -- # ctrls=() 00:12:03.605 19:12:40 -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:03.605 19:12:40 -- nvme/functions.sh@11 -- # nvmes=() 00:12:03.605 19:12:40 -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:03.605 19:12:40 -- nvme/functions.sh@12 -- # bdfs=() 00:12:03.605 19:12:40 -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:03.605 19:12:40 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:03.605 19:12:40 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:03.605 19:12:40 -- nvme/functions.sh@14 -- # nvme_name= 00:12:03.605 19:12:40 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:03.605 19:12:40 -- nvme/nvme_scc.sh@12 -- # uname 00:12:03.605 19:12:40 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
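scan_nvme_ctrls, which the trace below steps through register by register, walks /sys/class/nvme/nvme*, runs nvme-cli's id-ctrl against each controller, and folds every "name : value" line into a per-controller bash associative array (nvme0[vid], nvme0[mdts], and so on) for later feature checks. A minimal sketch of that parsing pattern, assuming nvme-cli is installed and a /dev/nvme0 controller exists:

    # Parse `nvme id-ctrl` output into an associative array, as scan_nvme_ctrls does below.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # field name, stripped of padding
        val=${val# }                  # drop the single space after the colon
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    printf 'vid=%s mdts=%s\n' "${nvme0[vid]}" "${nvme0[mdts]}"

String fields such as sn and mn keep their trailing padding, which is why values like '12343 ' show up quoted with spaces in the trace below.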
00:12:03.605 19:12:40 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:03.605 19:12:40 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:03.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:03.863 Waiting for block devices as requested 00:12:03.863 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.122 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.122 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.122 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:09.392 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:09.392 19:12:46 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:09.392 19:12:46 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:09.392 19:12:46 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:09.392 19:12:46 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:12:09.392 19:12:46 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:12:09.392 19:12:46 -- scripts/common.sh@15 -- # local i 00:12:09.392 19:12:46 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:12:09.392 19:12:46 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.392 19:12:46 -- scripts/common.sh@24 -- # return 0 00:12:09.392 19:12:46 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:09.392 19:12:46 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:09.392 19:12:46 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@18 -- # shift 00:12:09.392 19:12:46 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.392 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:09.392 19:12:46 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:09.392 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 
00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 
-- # [[ -n 3 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.393 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:09.393 19:12:46 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:09.393 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.393 
19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.394 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:09.394 19:12:46 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:09.394 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:09.395 19:12:46 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:09.395 19:12:46 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:09.395 19:12:46 -- nvme/functions.sh@62 -- # 
bdfs["$ctrl_dev"]=0000:00:09.0 00:12:09.395 19:12:46 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:09.395 19:12:46 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:09.395 19:12:46 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:12:09.395 19:12:46 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:12:09.395 19:12:46 -- scripts/common.sh@15 -- # local i 00:12:09.395 19:12:46 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:12:09.395 19:12:46 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.395 19:12:46 -- scripts/common.sh@24 -- # return 0 00:12:09.395 19:12:46 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:09.395 19:12:46 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:09.395 19:12:46 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@18 -- # shift 00:12:09.395 19:12:46 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 
525400 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 
-- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.395 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:09.395 19:12:46 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:09.395 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 
'nvme1[frmw]="0x3"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 
19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.396 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.396 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:09.396 19:12:46 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- 
# IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- 
# nvme1[icsvscc]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.397 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.397 19:12:46 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:09.397 19:12:46 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:09.398 19:12:46 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:09.398 19:12:46 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:09.398 19:12:46 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:09.398 19:12:46 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:09.398 19:12:46 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:09.398 19:12:46 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@18 -- # shift 00:12:09.398 19:12:46 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.398 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.398 19:12:46 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:09.661 19:12:46 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 
19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:09.662 19:12:46 -- 
nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 
19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.662 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.662 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.662 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:09.663 
19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:09.663 19:12:46 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:09.663 19:12:46 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:12:09.663 19:12:46 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:12:09.663 19:12:46 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@18 -- # shift 00:12:09.663 19:12:46 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 
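For readability, here is a minimal standalone sketch of the pattern the trace above keeps repeating: each "reg : val" line printed by nvme-cli is split on the first colon and stored in a bash associative array keyed by the register name, skipping empty values. The helper name, the trimming details, and the device path are illustrative assumptions, not the exact code in nvme/functions.sh:

#!/usr/bin/env bash
# Sketch only (assumes bash >= 4.2 and nvme-cli on PATH); not the repo's helper.
parse_id_output() {
    local dev=$1 reg val
    declare -gA id_fields=()                     # results land in a global associative array

    # e.g. `nvme id-ns /dev/nvme1n1` prints lines such as "nsze   : 0x100000"
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # strip padding around the register name
        val=${val#"${val%%[![:space:]]*}"}       # trim leading whitespace from the value
        [[ -n $val ]] && id_fields[$reg]=$val    # skip blank values, like the [[ -n ... ]] checks above
    done < <(nvme id-ns "$dev")
}

parse_id_output /dev/nvme1n1                     # device path is an assumption for the example
echo "nsze=${id_fields[nsze]} flbas=${id_fields[flbas]}"

Because the value is the last variable passed to read, anything after the first colon (including further colons, as in the lbaf and power-state fields) is kept intact, which matches what the trace records.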
00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.663 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:12:09.663 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.663 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 
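The trace shortly reaches the enumeration that drives these per-namespace dumps: a loop over the controllers under /sys/class/nvme and, inside it, a loop over each controller's nvmeXnY namespace nodes. The sketch below reproduces that shape under the assumption that the controller's PCI address is readable from the sysfs "address" attribute; the array and variable names are illustrative, not the ones used by the test scripts:

#!/usr/bin/env bash
# Sketch of the controller/namespace enumeration; names are assumptions.
declare -A ctrl_bdf=() ctrl_namespaces=()

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                   # glob may not match on hosts without NVMe
    name=${ctrl##*/}                             # e.g. nvme1
    ctrl_bdf[$name]=$(<"$ctrl/address")          # PCI BDF, e.g. 0000:00:08.0 (sysfs attribute assumed present)

    for ns in "$ctrl/${name}n"*; do              # e.g. .../nvme1/nvme1n1, nvme1n2, ...
        [[ -e $ns ]] || continue
        ctrl_namespaces[$name]+="${ns##*/} "     # accumulate namespace device names
    done
done

for name in "${!ctrl_bdf[@]}"; do
    printf '%s (%s): %s\n' "$name" "${ctrl_bdf[$name]}" "${ctrl_namespaces[$name]}"
done

Each namespace found this way would then be fed to an id-ns parse like the earlier sketch, which is exactly the sequence the log records for nvme1n1, nvme1n2, and nvme1n3 before moving on to the next controller.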
00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 
19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.664 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.664 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:09.664 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:12:09.665 19:12:46 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:09.665 19:12:46 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:12:09.665 19:12:46 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 
id-ns /dev/nvme1n3 00:12:09.665 19:12:46 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@18 -- # shift 00:12:09.665 19:12:46 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # 
nvme1n3[dps]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.665 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.665 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.665 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:12:09.666 19:12:46 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:09.666 19:12:46 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:09.666 19:12:46 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:12:09.666 19:12:46 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:09.666 19:12:46 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:09.666 19:12:46 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:12:09.666 19:12:46 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:12:09.666 19:12:46 -- scripts/common.sh@15 -- # local i 00:12:09.666 19:12:46 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:12:09.666 19:12:46 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.666 19:12:46 -- scripts/common.sh@24 -- # return 0 00:12:09.666 19:12:46 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:09.666 19:12:46 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:09.666 19:12:46 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@18 -- # shift 00:12:09.666 19:12:46 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.666 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:09.666 19:12:46 -- 
nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:09.666 19:12:46 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.666 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 
19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.667 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.667 19:12:46 -- 
nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:09.667 19:12:46 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:09.667 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.668 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.668 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:09.668 19:12:46 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:09.669 
19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 
'nvme2[oncs]="0x15d"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.669 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:09.669 19:12:46 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.669 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:09.670 19:12:46 -- 
nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:09.670 19:12:46 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:09.670 19:12:46 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:09.670 19:12:46 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:09.670 19:12:46 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@18 -- # shift 00:12:09.670 19:12:46 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 
00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:09.670 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.670 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.670 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:46 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:09.671 19:12:46 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:09.671 19:12:47 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:09.671 19:12:47 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:09.671 19:12:47 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:12:09.671 19:12:47 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:09.671 19:12:47 -- nvme/functions.sh@47 -- # for ctrl 
in /sys/class/nvme/nvme* 00:12:09.671 19:12:47 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:12:09.671 19:12:47 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:12:09.671 19:12:47 -- scripts/common.sh@15 -- # local i 00:12:09.671 19:12:47 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:12:09.671 19:12:47 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.671 19:12:47 -- scripts/common.sh@24 -- # return 0 00:12:09.671 19:12:47 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:09.671 19:12:47 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:09.671 19:12:47 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@18 -- # shift 00:12:09.671 19:12:47 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:09.671 19:12:47 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.671 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.671 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:09.672 19:12:47 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:09.672 19:12:47 -- 
nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:09.672 19:12:47 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:09.672 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.672 19:12:47 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:09.672 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:09.673 19:12:47 
-- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:09.673 19:12:47 -- 
nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.673 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:09.673 19:12:47 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:09.673 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # 
nvme3[icdoff]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.674 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:09.674 19:12:47 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.674 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.675 19:12:47 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.675 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.675 19:12:47 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.675 19:12:47 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:09.675 19:12:47 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:09.675 19:12:47 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:12:09.675 19:12:47 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:12:09.675 19:12:47 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:12:09.675 19:12:47 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:12:09.675 19:12:47 -- nvme/functions.sh@18 -- # shift 00:12:09.675 19:12:47 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.675 19:12:47 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:12:09.675 19:12:47 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.675 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 
00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.675 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.675 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.675 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:09.675 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:12:09.935 19:12:47 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:12:09.935 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.935 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.935 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[nvmsetid]="0"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:09.936 19:12:47 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # IFS=: 00:12:09.936 19:12:47 -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.936 19:12:47 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:12:09.936 19:12:47 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:09.936 19:12:47 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:09.936 19:12:47 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:12:09.936 19:12:47 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:09.936 19:12:47 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:09.936 19:12:47 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:09.936 19:12:47 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:12:09.936 19:12:47 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:09.936 19:12:47 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:12:09.936 19:12:47 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:09.936 19:12:47 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:12:09.936 19:12:47 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:12:09.936 19:12:47 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:09.936 19:12:47 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:12:09.936 19:12:47 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:12:09.936 19:12:47 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:12:09.936 19:12:47 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:12:09.936 19:12:47 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:09.936 19:12:47 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:09.936 19:12:47 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:09.936 19:12:47 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@76 -- # echo 0x15d 00:12:09.936 19:12:47 -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:09.936 19:12:47 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:09.936 19:12:47 -- nvme/functions.sh@197 -- # echo nvme1 00:12:09.936 19:12:47 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:09.936 19:12:47 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:12:09.936 19:12:47 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:12:09.936 19:12:47 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:12:09.936 19:12:47 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:12:09.936 19:12:47 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:09.936 19:12:47 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:09.936 19:12:47 -- 
nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:09.936 19:12:47 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@76 -- # echo 0x15d 00:12:09.936 19:12:47 -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:09.936 19:12:47 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:09.936 19:12:47 -- nvme/functions.sh@197 -- # echo nvme0 00:12:09.936 19:12:47 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:09.936 19:12:47 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:12:09.936 19:12:47 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:12:09.936 19:12:47 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:12:09.936 19:12:47 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:12:09.936 19:12:47 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:12:09.936 19:12:47 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:12:09.936 19:12:47 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:09.936 19:12:47 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:09.936 19:12:47 -- nvme/functions.sh@76 -- # echo 0x15d 00:12:09.937 19:12:47 -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:09.937 19:12:47 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:09.937 19:12:47 -- nvme/functions.sh@197 -- # echo nvme3 00:12:09.937 19:12:47 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:09.937 19:12:47 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:12:09.937 19:12:47 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:12:09.937 19:12:47 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:12:09.937 19:12:47 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:12:09.937 19:12:47 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:12:09.937 19:12:47 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:12:09.937 19:12:47 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:09.937 19:12:47 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:09.937 19:12:47 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:09.937 19:12:47 -- nvme/functions.sh@76 -- # echo 0x15d 00:12:09.937 19:12:47 -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:09.937 19:12:47 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:09.937 19:12:47 -- nvme/functions.sh@197 -- # echo nvme2 00:12:09.937 19:12:47 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:12:09.937 19:12:47 -- nvme/functions.sh@206 -- # echo nvme1 00:12:09.937 19:12:47 -- nvme/functions.sh@207 -- # return 0 00:12:09.937 19:12:47 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:12:09.937 19:12:47 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:12:09.937 19:12:47 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:10.873 lsblk: /dev/nvme0c0n1: not a block device 00:12:10.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:11.132 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.132 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.132 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.132 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.132 19:12:48 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:12:11.132 19:12:48 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:12:11.132 19:12:48 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:12:11.132 19:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:11.132 ************************************ 00:12:11.132 START TEST nvme_simple_copy 00:12:11.132 ************************************ 00:12:11.132 19:12:48 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:12:11.699 Initializing NVMe Controllers 00:12:11.699 Attaching to 0000:00:08.0 00:12:11.699 Controller supports SCC. Attached to 0000:00:08.0 00:12:11.699 Namespace ID: 1 size: 4GB 00:12:11.699 Initialization complete. 00:12:11.699 00:12:11.699 Controller QEMU NVMe Ctrl (12342 ) 00:12:11.699 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:12:11.699 Namespace Block Size:4096 00:12:11.699 Writing LBAs 0 to 63 with Random Data 00:12:11.699 Copied LBAs from 0 - 63 to the Destination LBA 256 00:12:11.699 LBAs matching Written Data: 64 00:12:11.699 00:12:11.699 real 0m0.310s 00:12:11.699 user 0m0.131s 00:12:11.699 sys 0m0.075s 00:12:11.699 19:12:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.699 19:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:11.700 ************************************ 00:12:11.700 END TEST nvme_simple_copy 00:12:11.700 ************************************ 00:12:11.700 00:12:11.700 real 0m8.318s 00:12:11.700 user 0m1.475s 00:12:11.700 sys 0m1.772s 00:12:11.700 19:12:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.700 19:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:11.700 ************************************ 00:12:11.700 END TEST nvme_scc 00:12:11.700 ************************************ 00:12:11.700 19:12:48 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:12:11.700 19:12:48 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:12:11.700 19:12:48 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:12:11.700 19:12:48 -- spdk/autotest.sh@238 -- # [[ 1 -eq 1 ]] 00:12:11.700 19:12:48 -- spdk/autotest.sh@239 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:12:11.700 19:12:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:11.700 19:12:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:11.700 19:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:11.700 ************************************ 00:12:11.700 START TEST nvme_fdp 00:12:11.700 ************************************ 00:12:11.700 19:12:48 -- common/autotest_common.sh@1102 -- # test/nvme/nvme_fdp.sh 00:12:11.700 * Looking for test storage... 
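The controller used for the simple-copy test above is picked purely from the ONCS field reported by nvme id-ctrl: bit 8 of ONCS advertises Simple Copy, and the value 0x15d traced above has that bit set, which is why ctrl_has_scc echoes each controller and nvme1 ends up selected. A minimal standalone sketch of that check, assuming nvme-cli's plain-text "oncs : 0x15d" output format (the awk parsing and the /dev/nvme1 path are illustrative, not the functions.sh implementation):

# Sketch: does a controller advertise Simple Copy (ONCS bit 8)?
ctrl=/dev/nvme1                                   # illustrative device path
oncs=$(nvme id-ctrl "$ctrl" | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2; exit}')
if (( oncs & (1 << 8) )); then
    echo "$ctrl supports Simple Copy (oncs=$oncs)"
else
    echo "$ctrl has no Simple Copy support (oncs=$oncs)"
fi

With oncs=0x15d this takes the "supports" branch, matching the (( oncs & 1 << 8 )) test traced at functions.sh@186 above.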
00:12:11.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:11.700 19:12:49 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:11.700 19:12:49 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:11.700 19:12:49 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:11.700 19:12:49 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:11.700 19:12:49 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:11.700 19:12:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.700 19:12:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.700 19:12:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.700 19:12:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.700 19:12:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.700 19:12:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.700 19:12:49 -- paths/export.sh@5 -- # export PATH 00:12:11.700 19:12:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.700 19:12:49 -- nvme/functions.sh@10 -- # ctrls=() 00:12:11.700 19:12:49 -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:11.700 19:12:49 -- nvme/functions.sh@11 -- # nvmes=() 00:12:11.700 19:12:49 -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:11.700 19:12:49 -- nvme/functions.sh@12 -- # bdfs=() 00:12:11.700 19:12:49 -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:11.700 19:12:49 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:11.700 19:12:49 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:11.700 19:12:49 -- nvme/functions.sh@14 -- # nvme_name= 00:12:11.700 19:12:49 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:11.700 19:12:49 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:12.267 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:12.267 Waiting for block devices as requested 00:12:12.267 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.267 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.525 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.525 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.914 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:17.914 19:12:54 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:17.914 19:12:54 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:17.914 19:12:54 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:17.914 19:12:54 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:17.914 19:12:54 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:12:17.914 19:12:54 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:12:17.914 19:12:54 -- scripts/common.sh@15 -- # local i 00:12:17.914 19:12:54 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:12:17.914 19:12:54 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:17.914 19:12:54 -- scripts/common.sh@24 -- # return 0 00:12:17.914 19:12:54 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:17.914 19:12:54 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:17.914 19:12:54 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:17.914 19:12:54 -- nvme/functions.sh@18 -- # shift 00:12:17.914 19:12:54 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:17.914 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.914 19:12:54 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:17.914 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.914 19:12:54 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.914 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.914 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.914 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:17.914 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:17.914 19:12:54 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:17.914 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.914 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.914 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:17.914 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:17.914 19:12:54 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:17.914 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.914 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.914 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:17.914 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:12:17.914 19:12:54 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:12:17.914 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.914 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.914 19:12:54 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:17.914 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:17.914 19:12:54 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- 
# nvme0[fr]='8.0.0 ' 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 
19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:17.915 19:12:54 -- nvme/functions.sh@21 
-- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:17.915 19:12:54 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.915 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.915 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # 
nvme0[hmpre]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 
00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.916 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.916 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.916 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 
-- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:17.917 
19:12:54 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:17.917 19:12:54 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:17.917 19:12:54 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:17.917 19:12:54 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:17.917 19:12:54 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:12:17.917 19:12:54 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:17.917 19:12:54 -- nvme/functions.sh@47 -- # for ctrl in 
/sys/class/nvme/nvme* 00:12:17.917 19:12:54 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:12:17.917 19:12:54 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:12:17.917 19:12:54 -- scripts/common.sh@15 -- # local i 00:12:17.917 19:12:54 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:12:17.917 19:12:54 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:17.917 19:12:54 -- scripts/common.sh@24 -- # return 0 00:12:17.917 19:12:54 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:17.917 19:12:54 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:17.917 19:12:54 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@18 -- # shift 00:12:17.917 19:12:54 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:54 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:17.917 19:12:54 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:54 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:55 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.917 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:17.917 19:12:55 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.917 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:17.918 19:12:55 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:17.918 19:12:55 -- 
nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:17.918 19:12:55 
-- nvme/functions.sh@21 -- # IFS=: 00:12:17.918 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.918 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.918 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:17.919 19:12:55 -- 
nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.919 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.919 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.919 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # 
nvme1[icdoff]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:17.920 19:12:55 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:17.920 19:12:55 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:17.920 19:12:55 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:17.920 19:12:55 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@18 -- # shift 00:12:17.920 19:12:55 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 
00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:17.920 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.920 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.920 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:17.921 19:12:55 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nvmsetid]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.921 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:17.921 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.921 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:17.922 19:12:55 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:17.922 19:12:55 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:12:17.922 19:12:55 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:12:17.922 19:12:55 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@18 -- # shift 00:12:17.922 19:12:55 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:12:17.922 
19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 
'nvme1n2[nacwu]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:12:17.922 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.922 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.922 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:17.923 19:12:55 
-- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:12:17.923 19:12:55 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:17.923 19:12:55 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:12:17.923 19:12:55 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:12:17.923 19:12:55 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@18 -- # shift 
00:12:17.923 19:12:55 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 
19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.923 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.923 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.923 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:12:17.924 19:12:55 
-- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.924 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:17.924 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:17.924 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:12:17.925 19:12:55 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:17.925 19:12:55 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:17.925 19:12:55 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:12:17.925 19:12:55 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:17.925 19:12:55 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:17.925 19:12:55 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:12:17.925 19:12:55 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:12:17.925 19:12:55 -- scripts/common.sh@15 -- # local i 00:12:17.925 19:12:55 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:12:17.925 19:12:55 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:17.925 19:12:55 -- scripts/common.sh@24 -- # return 0 00:12:17.925 19:12:55 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:17.925 19:12:55 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:17.925 19:12:55 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@18 -- # shift 00:12:17.925 19:12:55 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 
'nvme2[rtd3r]="0"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 
00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:17.925 19:12:55 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.925 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.925 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:17.926 
19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:17.926 
19:12:55 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:17.926 19:12:55 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.926 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.926 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 
19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 
00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:17.927 19:12:55 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:17.927 19:12:55 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:17.927 19:12:55 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:17.927 19:12:55 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:17.927 19:12:55 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:17.927 19:12:55 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@18 -- # shift 00:12:17.927 19:12:55 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.927 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.927 19:12:55 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 
-- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:17.928 19:12:55 -- 
nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.928 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:17.928 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.928 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 
lbads:9 rp:0 ' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:17.929 19:12:55 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:17.929 19:12:55 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:17.929 19:12:55 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:12:17.929 19:12:55 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:17.929 19:12:55 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:17.929 19:12:55 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:12:17.929 19:12:55 -- 
nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:12:17.929 19:12:55 -- scripts/common.sh@15 -- # local i 00:12:17.929 19:12:55 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:12:17.929 19:12:55 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:17.929 19:12:55 -- scripts/common.sh@24 -- # return 0 00:12:17.929 19:12:55 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:17.929 19:12:55 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:17.929 19:12:55 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@18 -- # shift 00:12:17.929 19:12:55 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.929 19:12:55 -- 
nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.929 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:17.929 19:12:55 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.929 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:17.930 
19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:17.930 19:12:55 -- 
nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- 
nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.930 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:17.930 19:12:55 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:17.930 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 
00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # 
nvme3[nwpc]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.931 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.931 19:12:55 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:17.931 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:17.932 19:12:55 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:17.932 19:12:55 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:12:17.932 19:12:55 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:12:17.932 19:12:55 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@18 -- # shift 00:12:17.932 19:12:55 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 
00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # 
nvme3n1[fpi]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:12:17.932 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.932 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.932 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.933 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:12:17.933 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:12:17.933 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.933 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.933 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.933 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:12:17.933 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:12:17.933 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:17.933 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.933 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.933 19:12:55 -- 
nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:12:17.933 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:12:17.933 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 
-- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:16 
lbads:12 rp:0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:18.192 19:12:55 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # IFS=: 00:12:18.192 19:12:55 -- nvme/functions.sh@21 -- # read -r reg val 00:12:18.192 19:12:55 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:12:18.192 19:12:55 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:18.192 19:12:55 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:18.192 19:12:55 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:12:18.192 19:12:55 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:18.192 19:12:55 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:18.192 19:12:55 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:18.192 19:12:55 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:18.192 19:12:55 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:18.192 19:12:55 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:18.192 19:12:55 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:18.192 19:12:55 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:18.192 19:12:55 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:18.192 19:12:55 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:18.193 19:12:55 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:18.193 19:12:55 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:18.193 19:12:55 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:18.193 19:12:55 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:18.193 19:12:55 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:18.193 19:12:55 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:18.193 19:12:55 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:18.193 19:12:55 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:18.193 19:12:55 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:18.193 19:12:55 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:18.193 19:12:55 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:18.193 19:12:55 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:18.193 19:12:55 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:18.193 19:12:55 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:18.193 19:12:55 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:18.193 19:12:55 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:18.193 19:12:55 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:18.193 19:12:55 -- nvme/functions.sh@76 -- # echo 0x88010 
00:12:18.193 19:12:55 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:18.193 19:12:55 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:18.193 19:12:55 -- nvme/functions.sh@197 -- # echo nvme0 00:12:18.193 19:12:55 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:18.193 19:12:55 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:18.193 19:12:55 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:18.193 19:12:55 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:18.193 19:12:55 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:18.193 19:12:55 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:18.193 19:12:55 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:18.193 19:12:55 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:18.193 19:12:55 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:18.193 19:12:55 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:18.193 19:12:55 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:18.193 19:12:55 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:18.193 19:12:55 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:18.193 19:12:55 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:18.193 19:12:55 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:18.193 19:12:55 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:18.193 19:12:55 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:18.193 19:12:55 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:18.193 19:12:55 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:18.193 19:12:55 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:18.193 19:12:55 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:18.193 19:12:55 -- nvme/functions.sh@204 -- # trap - ERR 00:12:18.193 19:12:55 -- nvme/functions.sh@204 -- # print_backtrace 00:12:18.193 19:12:55 -- common/autotest_common.sh@1130 -- # [[ hxBET =~ e ]] 00:12:18.193 19:12:55 -- common/autotest_common.sh@1130 -- # return 0 00:12:18.193 19:12:55 -- nvme/functions.sh@204 -- # trap - ERR 00:12:18.193 19:12:55 -- nvme/functions.sh@204 -- # print_backtrace 00:12:18.193 19:12:55 -- common/autotest_common.sh@1130 -- # [[ hxBET =~ e ]] 00:12:18.193 19:12:55 -- common/autotest_common.sh@1130 -- # return 0 00:12:18.193 19:12:55 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:18.193 19:12:55 -- nvme/functions.sh@206 -- # echo nvme0 00:12:18.193 19:12:55 -- nvme/functions.sh@207 -- # return 0 00:12:18.193 19:12:55 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:12:18.193 19:12:55 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:12:18.193 19:12:55 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:19.127 lsblk: /dev/nvme0c0n1: not a block device 00:12:19.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:19.127 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:19.127 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:19.386 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:19.386 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:19.386 19:12:56 -- nvme/nvme_fdp.sh@17 -- # run_test 
nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:12:19.386 19:12:56 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:12:19.386 19:12:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:19.386 19:12:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.386 ************************************ 00:12:19.386 START TEST nvme_flexible_data_placement 00:12:19.386 ************************************ 00:12:19.386 19:12:56 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:12:19.645 Initializing NVMe Controllers 00:12:19.645 Attaching to 0000:00:09.0 00:12:19.645 Controller supports FDP Attached to 0000:00:09.0 00:12:19.645 Namespace ID: 1 Endurance Group ID: 1 00:12:19.645 Initialization complete. 00:12:19.645 00:12:19.645 ================================== 00:12:19.645 == FDP tests for Namespace: #01 == 00:12:19.645 ================================== 00:12:19.645 00:12:19.645 Get Feature: FDP: 00:12:19.645 ================= 00:12:19.645 Enabled: Yes 00:12:19.645 FDP configuration Index: 0 00:12:19.645 00:12:19.645 FDP configurations log page 00:12:19.645 =========================== 00:12:19.645 Number of FDP configurations: 1 00:12:19.645 Version: 0 00:12:19.645 Size: 112 00:12:19.645 FDP Configuration Descriptor: 0 00:12:19.645 Descriptor Size: 96 00:12:19.645 Reclaim Group Identifier format: 2 00:12:19.645 FDP Volatile Write Cache: Not Present 00:12:19.645 FDP Configuration: Valid 00:12:19.645 Vendor Specific Size: 0 00:12:19.645 Number of Reclaim Groups: 2 00:12:19.645 Number of Recalim Unit Handles: 8 00:12:19.645 Max Placement Identifiers: 128 00:12:19.645 Number of Namespaces Suppprted: 256 00:12:19.645 Reclaim unit Nominal Size: 6000000 bytes 00:12:19.645 Estimated Reclaim Unit Time Limit: Not Reported 00:12:19.645 RUH Desc #000: RUH Type: Initially Isolated 00:12:19.645 RUH Desc #001: RUH Type: Initially Isolated 00:12:19.645 RUH Desc #002: RUH Type: Initially Isolated 00:12:19.645 RUH Desc #003: RUH Type: Initially Isolated 00:12:19.645 RUH Desc #004: RUH Type: Initially Isolated 00:12:19.645 RUH Desc #005: RUH Type: Initially Isolated 00:12:19.645 RUH Desc #006: RUH Type: Initially Isolated 00:12:19.645 RUH Desc #007: RUH Type: Initially Isolated 00:12:19.645 00:12:19.645 FDP reclaim unit handle usage log page 00:12:19.645 ====================================== 00:12:19.645 Number of Reclaim Unit Handles: 8 00:12:19.645 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:19.645 RUH Usage Desc #001: RUH Attributes: Unused 00:12:19.645 RUH Usage Desc #002: RUH Attributes: Unused 00:12:19.645 RUH Usage Desc #003: RUH Attributes: Unused 00:12:19.645 RUH Usage Desc #004: RUH Attributes: Unused 00:12:19.645 RUH Usage Desc #005: RUH Attributes: Unused 00:12:19.645 RUH Usage Desc #006: RUH Attributes: Unused 00:12:19.645 RUH Usage Desc #007: RUH Attributes: Unused 00:12:19.645 00:12:19.645 FDP statistics log page 00:12:19.645 ======================= 00:12:19.645 Host bytes with metadata written: 769331200 00:12:19.645 Media bytes with metadata written: 769466368 00:12:19.645 Media bytes erased: 0 00:12:19.645 00:12:19.645 FDP Reclaim unit handle status 00:12:19.645 ============================== 00:12:19.645 Number of RUHS descriptors: 2 00:12:19.645 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000224f 00:12:19.645 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 
0x0000000000006000 00:12:19.645 00:12:19.645 FDP write on placement id: 0 success 00:12:19.645 00:12:19.645 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:19.645 00:12:19.645 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:19.645 00:12:19.645 Get Feature: FDP Events for Placement handle: #0 00:12:19.645 ======================== 00:12:19.645 Number of FDP Events: 6 00:12:19.645 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:19.645 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:19.645 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:12:19.645 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:19.645 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:19.645 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:12:19.645 00:12:19.645 FDP events log page 00:12:19.645 =================== 00:12:19.645 Number of FDP events: 1 00:12:19.645 FDP Event #0: 00:12:19.645 Event Type: RU Not Written to Capacity 00:12:19.645 Placement Identifier: Valid 00:12:19.645 NSID: Valid 00:12:19.645 Location: Valid 00:12:19.645 Placement Identifier: 0 00:12:19.645 Event Timestamp: c 00:12:19.645 Namespace Identifier: 1 00:12:19.645 Reclaim Group Identifier: 0 00:12:19.645 Reclaim Unit Handle Identifier: 0 00:12:19.645 00:12:19.645 FDP test passed 00:12:19.645 00:12:19.645 real 0m0.281s 00:12:19.645 user 0m0.084s 00:12:19.645 sys 0m0.092s 00:12:19.645 19:12:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.645 19:12:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.645 ************************************ 00:12:19.645 END TEST nvme_flexible_data_placement 00:12:19.645 ************************************ 00:12:19.645 00:12:19.645 real 0m8.077s 00:12:19.645 user 0m1.354s 00:12:19.645 sys 0m1.763s 00:12:19.645 19:12:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.645 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.645 ************************************ 00:12:19.645 END TEST nvme_fdp 00:12:19.645 ************************************ 00:12:19.645 19:12:57 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:12:19.645 19:12:57 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:19.645 19:12:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:19.645 19:12:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:19.645 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:19.904 ************************************ 00:12:19.905 START TEST nvme_rpc 00:12:19.905 ************************************ 00:12:19.905 19:12:57 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:19.905 * Looking for test storage... 
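The controller selection traced earlier comes down to one register: ctrl_has_fdp() reads the cached CTRATT value for each controller and tests bit 19, the Flexible Data Placement attribute, so nvme0 (ctratt 0x88010) is chosen while the other three controllers, which report 0x8000, are skipped. A minimal standalone sketch of that check, with the values from this run supplied directly rather than read from the identify data:

ctrl_has_fdp() {
    local ctratt=$1            # CTRATT from Identify Controller, e.g. 0x88010 for nvme0
    (( ctratt & 1 << 19 ))     # succeeds only when the FDP attribute bit is set
}

ctrl_has_fdp 0x88010 && echo "nvme0 supports FDP"        # bit 19 set
ctrl_has_fdp 0x8000  || echo "no FDP on this controller" # bit 19 clear, as for nvme1/2/3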
00:12:19.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:19.905 19:12:57 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.905 19:12:57 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:19.905 19:12:57 -- common/autotest_common.sh@1507 -- # bdfs=() 00:12:19.905 19:12:57 -- common/autotest_common.sh@1507 -- # local bdfs 00:12:19.905 19:12:57 -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:12:19.905 19:12:57 -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:12:19.905 19:12:57 -- common/autotest_common.sh@1496 -- # bdfs=() 00:12:19.905 19:12:57 -- common/autotest_common.sh@1496 -- # local bdfs 00:12:19.905 19:12:57 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:19.905 19:12:57 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:19.905 19:12:57 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:12:19.905 19:12:57 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:12:19.905 19:12:57 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:19.905 19:12:57 -- common/autotest_common.sh@1510 -- # echo 0000:00:06.0 00:12:19.905 19:12:57 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:12:19.905 19:12:57 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67791 00:12:19.905 19:12:57 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:19.905 19:12:57 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:19.905 19:12:57 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67791 00:12:19.905 19:12:57 -- common/autotest_common.sh@817 -- # '[' -z 67791 ']' 00:12:19.905 19:12:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.905 19:12:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:19.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.905 19:12:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.905 19:12:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:19.905 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.164 [2024-02-14 19:12:57.336326] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
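The BDF that nvme_rpc.sh settles on here is not hard-coded: get_first_nvme_bdf asks gen_nvme.sh for a bdev config, pulls every params.traddr out with jq, and takes the first entry, which is 0000:00:06.0 on this machine. Roughly the same lookup can be reproduced on its own, assuming the repo layout used in this run:

rootdir=/home/vagrant/spdk_repo/spdk

# gen_nvme.sh emits a JSON bdev config; each params.traddr is a local NVMe BDF.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

printf '%s\n' "${bdfs[@]}"      # 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 here
echo "first bdf: ${bdfs[0]}"    # the address the test attaches Nvme0 to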
00:12:20.164 [2024-02-14 19:12:57.336499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67791 ] 00:12:20.164 [2024-02-14 19:12:57.509831] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:20.423 [2024-02-14 19:12:57.739116] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:20.423 [2024-02-14 19:12:57.739571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.423 [2024-02-14 19:12:57.739741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.801 19:12:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:21.801 19:12:59 -- common/autotest_common.sh@850 -- # return 0 00:12:21.801 19:12:59 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:12:22.060 Nvme0n1 00:12:22.060 19:12:59 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:22.060 19:12:59 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:22.319 request: 00:12:22.319 { 00:12:22.319 "filename": "non_existing_file", 00:12:22.319 "bdev_name": "Nvme0n1", 00:12:22.319 "method": "bdev_nvme_apply_firmware", 00:12:22.319 "req_id": 1 00:12:22.319 } 00:12:22.319 Got JSON-RPC error response 00:12:22.319 response: 00:12:22.319 { 00:12:22.319 "code": -32603, 00:12:22.319 "message": "open file failed." 00:12:22.319 } 00:12:22.319 19:12:59 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:22.319 19:12:59 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:22.319 19:12:59 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:22.577 19:12:59 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:22.577 19:12:59 -- nvme/nvme_rpc.sh@40 -- # killprocess 67791 00:12:22.577 19:12:59 -- common/autotest_common.sh@924 -- # '[' -z 67791 ']' 00:12:22.577 19:12:59 -- common/autotest_common.sh@928 -- # kill -0 67791 00:12:22.577 19:12:59 -- common/autotest_common.sh@929 -- # uname 00:12:22.577 19:12:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:22.577 19:12:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 67791 00:12:22.577 19:12:59 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:22.577 19:12:59 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:22.577 killing process with pid 67791 00:12:22.577 19:12:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 67791' 00:12:22.577 19:12:59 -- common/autotest_common.sh@943 -- # kill 67791 00:12:22.577 19:12:59 -- common/autotest_common.sh@948 -- # wait 67791 00:12:24.480 00:12:24.480 real 0m4.672s 00:12:24.480 user 0m9.112s 00:12:24.480 sys 0m0.583s 00:12:24.480 19:13:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:24.480 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.480 ************************************ 00:12:24.480 END TEST nvme_rpc 00:12:24.480 ************************************ 00:12:24.480 19:13:01 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:24.480 19:13:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:24.480 19:13:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 
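The JSON-RPC failure above is the point of the test: bdev_nvme_apply_firmware is handed a file that does not exist, so the -32603 "open file failed." response and the nonzero return are the expected outcome. A condensed sketch of the same sequence against an already running spdk_tgt, using the paths and controller address from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach the first controller as bdev "Nvme0"; its namespace shows up as Nvme0n1.
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0

# Feeding a non-existent firmware image is expected to fail with -32603.
if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "got the expected 'open file failed.' error"
fi

$rpc bdev_nvme_detach_controller Nvme0    # clean up before shutting the target down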
00:12:24.480 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.480 ************************************ 00:12:24.480 START TEST nvme_rpc_timeouts 00:12:24.480 ************************************ 00:12:24.480 19:13:01 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:24.480 * Looking for test storage... 00:12:24.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:24.480 19:13:01 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.480 19:13:01 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67875 00:12:24.480 19:13:01 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67875 00:12:24.480 19:13:01 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67898 00:12:24.480 19:13:01 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:24.480 19:13:01 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:24.480 19:13:01 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67898 00:12:24.480 19:13:01 -- common/autotest_common.sh@817 -- # '[' -z 67898 ']' 00:12:24.480 19:13:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.480 19:13:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:24.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.480 19:13:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.480 19:13:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:24.480 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.738 [2024-02-14 19:13:01.981977] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:24.738 [2024-02-14 19:13:01.982133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67898 ] 00:12:24.738 [2024-02-14 19:13:02.151176] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:24.997 [2024-02-14 19:13:02.336308] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:24.997 [2024-02-14 19:13:02.336704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.997 [2024-02-14 19:13:02.336713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.377 19:13:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:26.377 19:13:03 -- common/autotest_common.sh@850 -- # return 0 00:12:26.377 Checking default timeout settings: 00:12:26.377 19:13:03 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:26.377 19:13:03 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:26.636 Making settings changes with rpc: 00:12:26.636 19:13:03 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:26.636 19:13:03 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:26.894 Check default vs. 
modified settings: 00:12:26.894 19:13:04 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:26.894 19:13:04 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67875 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67875 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:27.462 Setting action_on_timeout is changed as expected. 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67875 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67875 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:27.462 Setting timeout_us is changed as expected. 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67875 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67875 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:27.462 Setting timeout_admin_us is changed as expected. 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
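The default-vs-modified comparison above leans on the fact that save_config pretty-prints one key per line, so a grep/awk/sed chain is enough to pull each value out of the saved JSON without a parser. A trimmed-down sketch of the same check, using the temp file names from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
def_file=/tmp/settings_default_67875
mod_file=/tmp/settings_modified_67875

$rpc save_config > "$def_file"
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > "$mod_file"

for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" "$def_file" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" "$mod_file" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done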
00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67875 /tmp/settings_modified_67875 00:12:27.462 19:13:04 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67898 00:12:27.462 19:13:04 -- common/autotest_common.sh@924 -- # '[' -z 67898 ']' 00:12:27.462 19:13:04 -- common/autotest_common.sh@928 -- # kill -0 67898 00:12:27.462 19:13:04 -- common/autotest_common.sh@929 -- # uname 00:12:27.462 19:13:04 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:27.462 19:13:04 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 67898 00:12:27.462 19:13:04 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:27.462 19:13:04 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:27.462 19:13:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 67898' 00:12:27.462 killing process with pid 67898 00:12:27.462 19:13:04 -- common/autotest_common.sh@943 -- # kill 67898 00:12:27.462 19:13:04 -- common/autotest_common.sh@948 -- # wait 67898 00:12:29.362 RPC TIMEOUT SETTING TEST PASSED. 00:12:29.362 19:13:06 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:12:29.362 00:12:29.362 real 0m4.971s 00:12:29.362 user 0m9.903s 00:12:29.362 sys 0m0.560s 00:12:29.362 19:13:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:29.362 ************************************ 00:12:29.362 19:13:06 -- common/autotest_common.sh@10 -- # set +x 00:12:29.362 END TEST nvme_rpc_timeouts 00:12:29.362 ************************************ 00:12:29.620 19:13:06 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:12:29.620 19:13:06 -- spdk/autotest.sh@255 -- # [[ 1 -eq 1 ]] 00:12:29.620 19:13:06 -- spdk/autotest.sh@256 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:29.620 19:13:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:29.620 19:13:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:29.620 19:13:06 -- common/autotest_common.sh@10 -- # set +x 00:12:29.620 ************************************ 00:12:29.620 START TEST nvme_xnvme 00:12:29.620 ************************************ 00:12:29.620 19:13:06 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:29.620 * Looking for test storage... 
00:12:29.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:29.620 19:13:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.620 19:13:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.620 19:13:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.620 19:13:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.620 19:13:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.620 19:13:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.620 19:13:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.620 19:13:06 -- paths/export.sh@5 -- # export PATH 00:12:29.620 19:13:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.620 19:13:06 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:29.620 19:13:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:29.620 19:13:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:29.620 19:13:06 -- common/autotest_common.sh@10 -- # set +x 00:12:29.620 ************************************ 00:12:29.620 START TEST xnvme_to_malloc_dd_copy 00:12:29.620 ************************************ 00:12:29.620 19:13:06 -- common/autotest_common.sh@1102 -- # malloc_to_xnvme_copy 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:29.621 19:13:06 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:29.621 19:13:06 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:29.621 19:13:06 -- dd/common.sh@191 -- # return 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@18 -- # local io 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:29.621 
19:13:06 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:29.621 19:13:06 -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:29.621 19:13:06 -- dd/common.sh@31 -- # xtrace_disable 00:12:29.621 19:13:06 -- common/autotest_common.sh@10 -- # set +x 00:12:29.621 { 00:12:29.621 "subsystems": [ 00:12:29.621 { 00:12:29.621 "subsystem": "bdev", 00:12:29.621 "config": [ 00:12:29.621 { 00:12:29.621 "params": { 00:12:29.621 "block_size": 512, 00:12:29.621 "num_blocks": 2097152, 00:12:29.621 "name": "malloc0" 00:12:29.621 }, 00:12:29.621 "method": "bdev_malloc_create" 00:12:29.621 }, 00:12:29.621 { 00:12:29.621 "params": { 00:12:29.621 "io_mechanism": "libaio", 00:12:29.621 "filename": "/dev/nullb0", 00:12:29.621 "name": "null0" 00:12:29.621 }, 00:12:29.621 "method": "bdev_xnvme_create" 00:12:29.621 }, 00:12:29.621 { 00:12:29.621 "method": "bdev_wait_for_examine" 00:12:29.621 } 00:12:29.621 ] 00:12:29.621 } 00:12:29.621 ] 00:12:29.621 } 00:12:29.621 [2024-02-14 19:13:07.008546] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
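The gen_conf JSON shown above is the entire configuration spdk_dd needs for this pass: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as the source and an xnvme bdev, libaio on the null_blk device, as the target, handed over on /dev/fd/62 instead of a config file. A self-contained sketch of the same copy, writing the config to a temp file for readability and assuming null_blk can be loaded with gb=1 exactly as init_null_blk does:

modprobe null_blk gb=1                  # provides /dev/nullb0

cat > /tmp/xnvme_copy.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json

modprobe -r null_blk                    # remove_null_blk, as the test does when it finishes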
00:12:29.621 [2024-02-14 19:13:07.008685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68037 ] 00:12:29.879 [2024-02-14 19:13:07.170923] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.137 [2024-02-14 19:13:07.396423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.137 [2024-02-14 19:13:07.396552] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:37.342  Copying: 171/1024 [MB] (171 MBps) Copying: 341/1024 [MB] (170 MBps) Copying: 518/1024 [MB] (176 MBps) Copying: 695/1024 [MB] (177 MBps) Copying: 872/1024 [MB] (176 MBps) Copying: 1024/1024 [MB] (average 174 MBps)[2024-02-14 19:13:14.517236] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:40.626 00:12:40.626 00:12:40.884 19:13:18 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:40.884 19:13:18 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:40.884 19:13:18 -- dd/common.sh@31 -- # xtrace_disable 00:12:40.884 19:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:40.884 { 00:12:40.884 "subsystems": [ 00:12:40.884 { 00:12:40.884 "subsystem": "bdev", 00:12:40.884 "config": [ 00:12:40.884 { 00:12:40.884 "params": { 00:12:40.884 "block_size": 512, 00:12:40.884 "num_blocks": 2097152, 00:12:40.884 "name": "malloc0" 00:12:40.884 }, 00:12:40.884 "method": "bdev_malloc_create" 00:12:40.884 }, 00:12:40.884 { 00:12:40.884 "params": { 00:12:40.884 "io_mechanism": "libaio", 00:12:40.884 "filename": "/dev/nullb0", 00:12:40.884 "name": "null0" 00:12:40.884 }, 00:12:40.884 "method": "bdev_xnvme_create" 00:12:40.884 }, 00:12:40.884 { 00:12:40.884 "method": "bdev_wait_for_examine" 00:12:40.884 } 00:12:40.884 ] 00:12:40.884 } 00:12:40.884 ] 00:12:40.884 } 00:12:40.884 [2024-02-14 19:13:18.167390] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
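Each of these dd passes moves the whole malloc bdev: 2097152 blocks of 512 bytes is exactly 1 GiB, reported as 1024 MB, so the 174 MBps average for the pass that just completed works out to roughly 1024 / 174, about 6 seconds of copy time; the rest of the wall time charged to the test is spdk_dd startup and teardown around each pass.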
00:12:40.884 [2024-02-14 19:13:18.167574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68161 ] 00:12:41.143 [2024-02-14 19:13:18.341815] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.143 [2024-02-14 19:13:18.536457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.143 [2024-02-14 19:13:18.536569] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:48.791  Copying: 169/1024 [MB] (169 MBps) Copying: 337/1024 [MB] (168 MBps) Copying: 505/1024 [MB] (167 MBps) Copying: 671/1024 [MB] (165 MBps) Copying: 838/1024 [MB] (167 MBps) Copying: 1006/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 167 MBps)[2024-02-14 19:13:25.923632] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:52.071 00:12:52.071 00:12:52.071 19:13:29 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:52.071 19:13:29 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:52.072 19:13:29 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:52.072 19:13:29 -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:52.072 19:13:29 -- dd/common.sh@31 -- # xtrace_disable 00:12:52.072 19:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:52.072 { 00:12:52.072 "subsystems": [ 00:12:52.072 { 00:12:52.072 "subsystem": "bdev", 00:12:52.072 "config": [ 00:12:52.072 { 00:12:52.072 "params": { 00:12:52.072 "block_size": 512, 00:12:52.072 "num_blocks": 2097152, 00:12:52.072 "name": "malloc0" 00:12:52.072 }, 00:12:52.072 "method": "bdev_malloc_create" 00:12:52.072 }, 00:12:52.072 { 00:12:52.072 "params": { 00:12:52.072 "io_mechanism": "io_uring", 00:12:52.072 "filename": "/dev/nullb0", 00:12:52.072 "name": "null0" 00:12:52.072 }, 00:12:52.072 "method": "bdev_xnvme_create" 00:12:52.072 }, 00:12:52.072 { 00:12:52.072 "method": "bdev_wait_for_examine" 00:12:52.072 } 00:12:52.072 ] 00:12:52.072 } 00:12:52.072 ] 00:12:52.072 } 00:12:52.072 [2024-02-14 19:13:29.406643] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:12:52.072 [2024-02-14 19:13:29.406797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68291 ] 00:12:52.330 [2024-02-14 19:13:29.565224] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.330 [2024-02-14 19:13:29.744951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.330 [2024-02-14 19:13:29.745047] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:58.886  Copying: 191/1024 [MB] (191 MBps) Copying: 382/1024 [MB] (190 MBps) Copying: 574/1024 [MB] (191 MBps) Copying: 769/1024 [MB] (195 MBps) Copying: 966/1024 [MB] (197 MBps) Copying: 1024/1024 [MB] (average 193 MBps)[2024-02-14 19:13:36.195585] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:13:02.168 00:13:02.168 00:13:02.168 19:13:39 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:02.168 19:13:39 -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:02.168 19:13:39 -- dd/common.sh@31 -- # xtrace_disable 00:13:02.168 19:13:39 -- common/autotest_common.sh@10 -- # set +x 00:13:02.427 { 00:13:02.427 "subsystems": [ 00:13:02.427 { 00:13:02.427 "subsystem": "bdev", 00:13:02.427 "config": [ 00:13:02.427 { 00:13:02.427 "params": { 00:13:02.427 "block_size": 512, 00:13:02.427 "num_blocks": 2097152, 00:13:02.427 "name": "malloc0" 00:13:02.427 }, 00:13:02.427 "method": "bdev_malloc_create" 00:13:02.427 }, 00:13:02.427 { 00:13:02.427 "params": { 00:13:02.427 "io_mechanism": "io_uring", 00:13:02.427 "filename": "/dev/nullb0", 00:13:02.427 "name": "null0" 00:13:02.427 }, 00:13:02.427 "method": "bdev_xnvme_create" 00:13:02.427 }, 00:13:02.427 { 00:13:02.427 "method": "bdev_wait_for_examine" 00:13:02.427 } 00:13:02.427 ] 00:13:02.427 } 00:13:02.427 ] 00:13:02.427 } 00:13:02.427 [2024-02-14 19:13:39.615384] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:13:02.427 [2024-02-14 19:13:39.615540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68406 ] 00:13:02.427 [2024-02-14 19:13:39.774044] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.685 [2024-02-14 19:13:39.951643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.685 [2024-02-14 19:13:39.951761] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:13:09.101  Copying: 194/1024 [MB] (194 MBps) Copying: 391/1024 [MB] (197 MBps) Copying: 587/1024 [MB] (195 MBps) Copying: 782/1024 [MB] (195 MBps) Copying: 981/1024 [MB] (198 MBps) Copying: 1024/1024 [MB] (average 196 MBps)[2024-02-14 19:13:46.360905] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:13:12.384 00:13:12.384 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:12.384 19:13:49 -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:12.384 00:13:12.384 real 0m42.817s 00:13:12.384 user 0m37.549s 00:13:12.384 sys 0m4.650s 00:13:12.384 19:13:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:12.384 19:13:49 -- common/autotest_common.sh@10 -- # set +x 00:13:12.384 ************************************ 00:13:12.384 END TEST xnvme_to_malloc_dd_copy 00:13:12.384 ************************************ 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:12.384 19:13:49 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:12.384 19:13:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:12.384 19:13:49 -- common/autotest_common.sh@10 -- # set +x 00:13:12.384 ************************************ 00:13:12.384 START TEST xnvme_bdevperf 00:13:12.384 ************************************ 00:13:12.384 19:13:49 -- common/autotest_common.sh@1102 -- # xnvme_bdevperf 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:12.384 19:13:49 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:12.384 19:13:49 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:12.384 19:13:49 -- dd/common.sh@191 -- # return 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@60 -- # local io 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:12.384 19:13:49 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:12.385 19:13:49 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:12.385 19:13:49 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:12.385 19:13:49 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:12.385 19:13:49 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:12.385 19:13:49 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf 
--json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:12.385 19:13:49 -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:12.385 19:13:49 -- dd/common.sh@31 -- # xtrace_disable 00:13:12.385 19:13:49 -- common/autotest_common.sh@10 -- # set +x 00:13:12.642 { 00:13:12.642 "subsystems": [ 00:13:12.642 { 00:13:12.642 "subsystem": "bdev", 00:13:12.642 "config": [ 00:13:12.642 { 00:13:12.642 "params": { 00:13:12.642 "io_mechanism": "libaio", 00:13:12.642 "filename": "/dev/nullb0", 00:13:12.642 "name": "null0" 00:13:12.643 }, 00:13:12.643 "method": "bdev_xnvme_create" 00:13:12.643 }, 00:13:12.643 { 00:13:12.643 "method": "bdev_wait_for_examine" 00:13:12.643 } 00:13:12.643 ] 00:13:12.643 } 00:13:12.643 ] 00:13:12.643 } 00:13:12.643 [2024-02-14 19:13:49.908881] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:13:12.643 [2024-02-14 19:13:49.909115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68545 ] 00:13:12.912 [2024-02-14 19:13:50.078724] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.912 [2024-02-14 19:13:50.254369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.912 [2024-02-14 19:13:50.254486] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:13:13.182 Running I/O for 5 seconds... 00:13:18.447 00:13:18.447 Latency(us) 00:13:18.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.447 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:18.447 null0 : 5.00 123076.14 480.77 0.00 0.00 516.87 182.46 703.77 00:13:18.447 =================================================================================================================== 00:13:18.447 Total : 123076.14 480.77 0.00 0.00 516.87 182.46 703.77 00:13:18.447 [2024-02-14 19:13:55.552862] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:13:19.381 19:13:56 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:19.381 19:13:56 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:19.381 19:13:56 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:19.381 19:13:56 -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:19.381 19:13:56 -- dd/common.sh@31 -- # xtrace_disable 00:13:19.381 19:13:56 -- common/autotest_common.sh@10 -- # set +x 00:13:19.381 { 00:13:19.381 "subsystems": [ 00:13:19.381 { 00:13:19.381 "subsystem": "bdev", 00:13:19.381 "config": [ 00:13:19.381 { 00:13:19.381 "params": { 00:13:19.381 "io_mechanism": "io_uring", 00:13:19.381 "filename": "/dev/nullb0", 00:13:19.381 "name": "null0" 00:13:19.381 }, 00:13:19.381 "method": "bdev_xnvme_create" 00:13:19.381 }, 00:13:19.381 { 00:13:19.381 "method": "bdev_wait_for_examine" 00:13:19.381 } 00:13:19.381 ] 00:13:19.381 } 00:13:19.381 ] 00:13:19.381 } 00:13:19.381 [2024-02-14 19:13:56.690584] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
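The bdevperf run above exercises the libaio xnvme bdev directly: 64 outstanding 4096-byte random reads for 5 seconds against null0. The numbers it reports are self-consistent, since at a fixed queue depth the average latency is roughly queue_depth / IOPS = 64 / 123076, about 520 us, in line with the 516.87 us shown. The invocation, reduced to a standalone form with the same flags but the config written to a file:

modprobe null_blk gb=1

cat > /tmp/xnvme_bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# -q queue depth, -w workload, -t seconds, -T bdev name, -o IO size in bytes
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdevperf.json -q 64 -w randread -t 5 -T null0 -o 4096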
00:13:19.381 [2024-02-14 19:13:56.690739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68619 ] 00:13:19.639 [2024-02-14 19:13:56.859395] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.639 [2024-02-14 19:13:57.042737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.639 [2024-02-14 19:13:57.042831] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:13:20.206 Running I/O for 5 seconds... 00:13:25.476 00:13:25.476 Latency(us) 00:13:25.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.476 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:25.476 null0 : 5.00 159592.99 623.41 0.00 0.00 397.96 277.41 592.06 00:13:25.476 =================================================================================================================== 00:13:25.476 Total : 159592.99 623.41 0.00 0.00 397.96 277.41 592.06 00:13:25.476 [2024-02-14 19:14:02.328906] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:13:26.042 19:14:03 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:26.042 19:14:03 -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:26.042 00:13:26.042 real 0m13.608s 00:13:26.042 user 0m10.620s 00:13:26.042 sys 0m2.771s 00:13:26.042 19:14:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:26.042 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:13:26.042 ************************************ 00:13:26.042 END TEST xnvme_bdevperf 00:13:26.042 ************************************ 00:13:26.042 00:13:26.042 real 0m56.611s 00:13:26.042 user 0m48.242s 00:13:26.042 sys 0m7.525s 00:13:26.042 19:14:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:26.042 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:13:26.042 ************************************ 00:13:26.042 END TEST nvme_xnvme 00:13:26.042 ************************************ 00:13:26.301 19:14:03 -- spdk/autotest.sh@257 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:26.301 19:14:03 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:26.301 19:14:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:26.301 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:13:26.301 ************************************ 00:13:26.301 START TEST blockdev_xnvme 00:13:26.301 ************************************ 00:13:26.301 19:14:03 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:26.301 * Looking for test storage... 
00:13:26.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:26.301 19:14:03 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:26.301 19:14:03 -- bdev/nbd_common.sh@6 -- # set -e 00:13:26.301 19:14:03 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:26.301 19:14:03 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:26.301 19:14:03 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:26.301 19:14:03 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:26.301 19:14:03 -- bdev/blockdev.sh@18 -- # : 00:13:26.301 19:14:03 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:13:26.301 19:14:03 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:13:26.301 19:14:03 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:13:26.301 19:14:03 -- bdev/blockdev.sh@672 -- # uname -s 00:13:26.301 19:14:03 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:13:26.301 19:14:03 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:13:26.301 19:14:03 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:13:26.301 19:14:03 -- bdev/blockdev.sh@681 -- # crypto_device= 00:13:26.301 19:14:03 -- bdev/blockdev.sh@682 -- # dek= 00:13:26.301 19:14:03 -- bdev/blockdev.sh@683 -- # env_ctx= 00:13:26.301 19:14:03 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:13:26.301 19:14:03 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:13:26.301 19:14:03 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:13:26.301 19:14:03 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:13:26.301 19:14:03 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:13:26.301 19:14:03 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=68765 00:13:26.301 19:14:03 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:26.301 19:14:03 -- bdev/blockdev.sh@47 -- # waitforlisten 68765 00:13:26.301 19:14:03 -- common/autotest_common.sh@817 -- # '[' -z 68765 ']' 00:13:26.301 19:14:03 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:26.301 19:14:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.301 19:14:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:26.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.301 19:14:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.301 19:14:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:26.301 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:13:26.301 [2024-02-14 19:14:03.680836] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
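The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is waitforlisten doing its job: the harness launches spdk_tgt in the background and retries an RPC (up to max_retries=100) until the socket answers. A minimal version of the same idea, using rpc_get_methods purely as a liveness probe rather than the harness code itself:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$spdk_tgt &                     # start the target in the background
tgt_pid=$!

# Poll until the RPC socket at /var/tmp/spdk.sock is up and answering.
until $rpc rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

echo "spdk_tgt (pid $tgt_pid) is listening"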
00:13:26.301 [2024-02-14 19:14:03.681033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68765 ] 00:13:26.559 [2024-02-14 19:14:03.851936] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.817 [2024-02-14 19:14:04.025235] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:26.817 [2024-02-14 19:14:04.025498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.191 19:14:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:28.191 19:14:05 -- common/autotest_common.sh@850 -- # return 0 00:13:28.191 19:14:05 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:13:28.191 19:14:05 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:13:28.191 19:14:05 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:13:28.191 19:14:05 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:13:28.191 19:14:05 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:28.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:28.449 Waiting for block devices as requested 00:13:28.449 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:13:28.707 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:13:28.707 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:13:28.707 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:34.003 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:34.003 19:14:11 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:13:34.003 19:14:11 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:13:34.003 19:14:11 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:13:34.003 19:14:11 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:13:34.003 19:14:11 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:13:34.003 19:14:11 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0c0n1 00:13:34.003 19:14:11 -- common/autotest_common.sh@1645 -- # local device=nvme0c0n1 00:13:34.003 19:14:11 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:13:34.003 19:14:11 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:13:34.003 19:14:11 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:13:34.003 19:14:11 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:13:34.003 19:14:11 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:13:34.003 19:14:11 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:34.003 19:14:11 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:13:34.003 19:14:11 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:13:34.003 19:14:11 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:13:34.003 19:14:11 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:13:34.003 19:14:11 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:34.003 19:14:11 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:13:34.003 19:14:11 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:13:34.003 19:14:11 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:13:34.003 19:14:11 -- common/autotest_common.sh@1645 -- # local 
device=nvme1n2 00:13:34.003 19:14:11 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:13:34.003 19:14:11 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:13:34.003 19:14:11 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:13:34.003 19:14:11 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n3 00:13:34.003 19:14:11 -- common/autotest_common.sh@1645 -- # local device=nvme1n3 00:13:34.003 19:14:11 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:13:34.004 19:14:11 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:13:34.004 19:14:11 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:13:34.004 19:14:11 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme2n1 00:13:34.004 19:14:11 -- common/autotest_common.sh@1645 -- # local device=nvme2n1 00:13:34.004 19:14:11 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:34.004 19:14:11 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:13:34.004 19:14:11 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:13:34.004 19:14:11 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme3n1 00:13:34.004 19:14:11 -- common/autotest_common.sh@1645 -- # local device=nvme3n1 00:13:34.004 19:14:11 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:34.004 19:14:11 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:34.004 19:14:11 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:34.004 19:14:11 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:34.004 19:14:11 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:34.004 19:14:11 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:34.004 19:14:11 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:34.004 19:14:11 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:13:34.004 19:14:11 -- bdev/blockdev.sh@98 -- # rpc_cmd 
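The device scan above keeps a namespace only if it is a real block device and not zoned, which it decides by reading /sys/block/<name>/queue/zoned (every device in this run reports "none", so all six survive), and then turns each /dev/nvme*n* into one bdev_xnvme_create line for rpc_cmd. A compact sketch of that filter with the same io_uring io_mechanism:

io_mechanism=io_uring
cmds=()

for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue                            # real block devices only
    name=${nvme##*/}
    zoned=/sys/block/$name/queue/zoned
    [[ -e $zoned && $(<"$zoned") != none ]] && continue   # skip zoned namespaces
    cmds+=("bdev_xnvme_create $nvme $name $io_mechanism")
done

printf '%s\n' "${cmds[@]}"
# bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring
# bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring
# ... one line per namespace, piped into rpc_cmd by the test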
00:13:34.004 19:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.004 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:34.004 19:14:11 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:34.004 nvme0n1 00:13:34.004 nvme1n1 00:13:34.004 nvme1n2 00:13:34.004 nvme1n3 00:13:34.004 nvme2n1 00:13:34.004 nvme3n1 00:13:34.004 19:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:13:34.004 19:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.004 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:34.004 19:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@738 -- # cat 00:13:34.004 19:14:11 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:13:34.004 19:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.004 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:34.004 19:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:13:34.004 19:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.004 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:34.004 19:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:34.004 19:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.004 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:34.004 19:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:13:34.004 19:14:11 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:13:34.004 19:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.004 19:14:11 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:13:34.004 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:34.004 19:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.004 19:14:11 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:13:34.004 19:14:11 -- bdev/blockdev.sh@747 -- # jq -r .name 00:13:34.004 19:14:11 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1e002af8-cddf-4d09-ab86-615c5fa664fc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1e002af8-cddf-4d09-ab86-615c5fa664fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "23696e61-fbb9-4ac4-9486-e5ad359115c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23696e61-fbb9-4ac4-9486-e5ad359115c6",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "776f5e33-cc91-4f86-b087-93b02bcc4f52"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "776f5e33-cc91-4f86-b087-93b02bcc4f52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "1e639dfd-c214-4a69-a5eb-3841f789be14"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1e639dfd-c214-4a69-a5eb-3841f789be14",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b1477262-76af-4101-af52-5abf6763a06d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b1477262-76af-4101-af52-5abf6763a06d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "caab780e-cd05-48ba-9227-86fe5307e7e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "caab780e-cd05-48ba-9227-86fe5307e7e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:34.004 19:14:11 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:13:34.004 19:14:11 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:13:34.004 19:14:11 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:13:34.004 19:14:11 -- bdev/blockdev.sh@752 -- # killprocess 68765 00:13:34.004 19:14:11 -- 
common/autotest_common.sh@924 -- # '[' -z 68765 ']' 00:13:34.004 19:14:11 -- common/autotest_common.sh@928 -- # kill -0 68765 00:13:34.004 19:14:11 -- common/autotest_common.sh@929 -- # uname 00:13:34.004 19:14:11 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:34.004 19:14:11 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 68765 00:13:34.262 19:14:11 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:34.262 killing process with pid 68765 00:13:34.262 19:14:11 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:34.262 19:14:11 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 68765' 00:13:34.262 19:14:11 -- common/autotest_common.sh@943 -- # kill 68765 00:13:34.262 19:14:11 -- common/autotest_common.sh@948 -- # wait 68765 00:13:36.163 19:14:13 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:36.163 19:14:13 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:36.163 19:14:13 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:13:36.163 19:14:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:36.163 19:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.163 ************************************ 00:13:36.163 START TEST bdev_hello_world 00:13:36.163 ************************************ 00:13:36.163 19:14:13 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:36.163 [2024-02-14 19:14:13.439683] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:13:36.163 [2024-02-14 19:14:13.439848] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69156 ] 00:13:36.421 [2024-02-14 19:14:13.599383] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.421 [2024-02-14 19:14:13.776694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.421 [2024-02-14 19:14:13.776815] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:13:36.988 [2024-02-14 19:14:14.142400] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:36.988 [2024-02-14 19:14:14.142461] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:36.988 [2024-02-14 19:14:14.142532] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:36.988 [2024-02-14 19:14:14.144691] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:36.988 [2024-02-14 19:14:14.145100] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:36.988 [2024-02-14 19:14:14.145141] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:36.988 [2024-02-14 19:14:14.145406] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
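The bdev_hello_world run above drives SPDK's hello_bdev example against the first xNVMe bdev: it opens the bdev named with -b, writes "Hello World!", reads it back, and stops (the teardown follows just below). A sketch of the standalone invocation, using the repo-relative paths from the trace, where bdev.json is the config that declares the xNVMe bdevs:

# write and read back "Hello World!" through the nvme0n1 bdev
./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1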
00:13:36.988 00:13:36.988 [2024-02-14 19:14:14.145450] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:36.988 [2024-02-14 19:14:14.145501] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:13:37.923 00:13:37.923 real 0m1.798s 00:13:37.923 user 0m1.515s 00:13:37.923 sys 0m0.169s 00:13:37.923 19:14:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:37.923 ************************************ 00:13:37.923 END TEST bdev_hello_world 00:13:37.923 19:14:15 -- common/autotest_common.sh@10 -- # set +x 00:13:37.923 ************************************ 00:13:37.923 19:14:15 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:13:37.923 19:14:15 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:37.923 19:14:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:37.923 19:14:15 -- common/autotest_common.sh@10 -- # set +x 00:13:37.923 ************************************ 00:13:37.923 START TEST bdev_bounds 00:13:37.923 ************************************ 00:13:37.923 19:14:15 -- common/autotest_common.sh@1102 -- # bdev_bounds '' 00:13:37.923 19:14:15 -- bdev/blockdev.sh@288 -- # bdevio_pid=69198 00:13:37.923 19:14:15 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:37.923 19:14:15 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:37.923 Process bdevio pid: 69198 00:13:37.923 19:14:15 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 69198' 00:13:37.923 19:14:15 -- bdev/blockdev.sh@291 -- # waitforlisten 69198 00:13:37.923 19:14:15 -- common/autotest_common.sh@817 -- # '[' -z 69198 ']' 00:13:37.923 19:14:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.923 19:14:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:37.923 19:14:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.923 19:14:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:37.923 19:14:15 -- common/autotest_common.sh@10 -- # set +x 00:13:37.923 [2024-02-14 19:14:15.300663] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:13:37.923 [2024-02-14 19:14:15.300872] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69198 ] 00:13:38.181 [2024-02-14 19:14:15.472817] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.439 [2024-02-14 19:14:15.650512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.439 [2024-02-14 19:14:15.650558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.439 [2024-02-14 19:14:15.650548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.439 [2024-02-14 19:14:15.650899] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:13:39.004 19:14:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:39.004 19:14:16 -- common/autotest_common.sh@850 -- # return 0 00:13:39.004 19:14:16 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:39.004 I/O targets: 00:13:39.004 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:39.004 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:39.004 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:39.004 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:39.004 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:39.004 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:39.004 00:13:39.004 00:13:39.004 CUnit - A unit testing framework for C - Version 2.1-3 00:13:39.004 http://cunit.sourceforge.net/ 00:13:39.004 00:13:39.004 00:13:39.004 Suite: bdevio tests on: nvme3n1 00:13:39.004 Test: blockdev write read block ...passed 00:13:39.004 Test: blockdev write zeroes read block ...passed 00:13:39.004 Test: blockdev write zeroes read no split ...passed 00:13:39.004 Test: blockdev write zeroes read split ...passed 00:13:39.004 Test: blockdev write zeroes read split partial ...passed 00:13:39.004 Test: blockdev reset ...passed 00:13:39.004 Test: blockdev write read 8 blocks ...passed 00:13:39.004 Test: blockdev write read size > 128k ...passed 00:13:39.004 Test: blockdev write read invalid size ...passed 00:13:39.004 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:39.004 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:39.004 Test: blockdev write read max offset ...passed 00:13:39.004 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:39.004 Test: blockdev writev readv 8 blocks ...passed 00:13:39.004 Test: blockdev writev readv 30 x 1block ...passed 00:13:39.004 Test: blockdev writev readv block ...passed 00:13:39.004 Test: blockdev writev readv size > 128k ...passed 00:13:39.004 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:39.004 Test: blockdev comparev and writev ...passed 00:13:39.004 Test: blockdev nvme passthru rw ...passed 00:13:39.004 Test: blockdev nvme passthru vendor specific ...passed 00:13:39.004 Test: blockdev nvme admin passthru ...passed 00:13:39.004 Test: blockdev copy ...passed 00:13:39.004 Suite: bdevio tests on: nvme2n1 00:13:39.004 Test: blockdev write read block ...passed 00:13:39.004 Test: blockdev write zeroes read block ...passed 00:13:39.004 Test: blockdev write zeroes read no split ...passed 00:13:39.262 Test: blockdev write zeroes read 
split ...passed 00:13:39.262 Test: blockdev write zeroes read split partial ...passed 00:13:39.262 Test: blockdev reset ...passed 00:13:39.262 Test: blockdev write read 8 blocks ...passed 00:13:39.262 Test: blockdev write read size > 128k ...passed 00:13:39.262 Test: blockdev write read invalid size ...passed 00:13:39.262 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:39.262 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:39.262 Test: blockdev write read max offset ...passed 00:13:39.262 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:39.262 Test: blockdev writev readv 8 blocks ...passed 00:13:39.262 Test: blockdev writev readv 30 x 1block ...passed 00:13:39.262 Test: blockdev writev readv block ...passed 00:13:39.262 Test: blockdev writev readv size > 128k ...passed 00:13:39.262 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:39.262 Test: blockdev comparev and writev ...passed 00:13:39.262 Test: blockdev nvme passthru rw ...passed 00:13:39.262 Test: blockdev nvme passthru vendor specific ...passed 00:13:39.262 Test: blockdev nvme admin passthru ...passed 00:13:39.262 Test: blockdev copy ...passed 00:13:39.262 Suite: bdevio tests on: nvme1n3 00:13:39.262 Test: blockdev write read block ...passed 00:13:39.262 Test: blockdev write zeroes read block ...passed 00:13:39.262 Test: blockdev write zeroes read no split ...passed 00:13:39.262 Test: blockdev write zeroes read split ...passed 00:13:39.262 Test: blockdev write zeroes read split partial ...passed 00:13:39.262 Test: blockdev reset ...passed 00:13:39.262 Test: blockdev write read 8 blocks ...passed 00:13:39.262 Test: blockdev write read size > 128k ...passed 00:13:39.262 Test: blockdev write read invalid size ...passed 00:13:39.262 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:39.262 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:39.262 Test: blockdev write read max offset ...passed 00:13:39.262 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:39.262 Test: blockdev writev readv 8 blocks ...passed 00:13:39.262 Test: blockdev writev readv 30 x 1block ...passed 00:13:39.262 Test: blockdev writev readv block ...passed 00:13:39.262 Test: blockdev writev readv size > 128k ...passed 00:13:39.262 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:39.262 Test: blockdev comparev and writev ...passed 00:13:39.262 Test: blockdev nvme passthru rw ...passed 00:13:39.262 Test: blockdev nvme passthru vendor specific ...passed 00:13:39.262 Test: blockdev nvme admin passthru ...passed 00:13:39.262 Test: blockdev copy ...passed 00:13:39.262 Suite: bdevio tests on: nvme1n2 00:13:39.262 Test: blockdev write read block ...passed 00:13:39.262 Test: blockdev write zeroes read block ...passed 00:13:39.262 Test: blockdev write zeroes read no split ...passed 00:13:39.262 Test: blockdev write zeroes read split ...passed 00:13:39.262 Test: blockdev write zeroes read split partial ...passed 00:13:39.262 Test: blockdev reset ...passed 00:13:39.262 Test: blockdev write read 8 blocks ...passed 00:13:39.262 Test: blockdev write read size > 128k ...passed 00:13:39.262 Test: blockdev write read invalid size ...passed 00:13:39.262 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:39.262 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:39.262 Test: blockdev write read max offset ...passed 00:13:39.262 
Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:39.262 Test: blockdev writev readv 8 blocks ...passed 00:13:39.262 Test: blockdev writev readv 30 x 1block ...passed 00:13:39.262 Test: blockdev writev readv block ...passed 00:13:39.262 Test: blockdev writev readv size > 128k ...passed 00:13:39.262 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:39.262 Test: blockdev comparev and writev ...passed 00:13:39.262 Test: blockdev nvme passthru rw ...passed 00:13:39.262 Test: blockdev nvme passthru vendor specific ...passed 00:13:39.262 Test: blockdev nvme admin passthru ...passed 00:13:39.262 Test: blockdev copy ...passed 00:13:39.262 Suite: bdevio tests on: nvme1n1 00:13:39.262 Test: blockdev write read block ...passed 00:13:39.262 Test: blockdev write zeroes read block ...passed 00:13:39.262 Test: blockdev write zeroes read no split ...passed 00:13:39.262 Test: blockdev write zeroes read split ...passed 00:13:39.520 Test: blockdev write zeroes read split partial ...passed 00:13:39.520 Test: blockdev reset ...passed 00:13:39.520 Test: blockdev write read 8 blocks ...passed 00:13:39.520 Test: blockdev write read size > 128k ...passed 00:13:39.520 Test: blockdev write read invalid size ...passed 00:13:39.520 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:39.520 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:39.520 Test: blockdev write read max offset ...passed 00:13:39.520 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:39.520 Test: blockdev writev readv 8 blocks ...passed 00:13:39.520 Test: blockdev writev readv 30 x 1block ...passed 00:13:39.520 Test: blockdev writev readv block ...passed 00:13:39.520 Test: blockdev writev readv size > 128k ...passed 00:13:39.520 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:39.520 Test: blockdev comparev and writev ...passed 00:13:39.520 Test: blockdev nvme passthru rw ...passed 00:13:39.520 Test: blockdev nvme passthru vendor specific ...passed 00:13:39.520 Test: blockdev nvme admin passthru ...passed 00:13:39.520 Test: blockdev copy ...passed 00:13:39.520 Suite: bdevio tests on: nvme0n1 00:13:39.520 Test: blockdev write read block ...passed 00:13:39.520 Test: blockdev write zeroes read block ...passed 00:13:39.520 Test: blockdev write zeroes read no split ...passed 00:13:39.520 Test: blockdev write zeroes read split ...passed 00:13:39.520 Test: blockdev write zeroes read split partial ...passed 00:13:39.520 Test: blockdev reset ...passed 00:13:39.520 Test: blockdev write read 8 blocks ...passed 00:13:39.520 Test: blockdev write read size > 128k ...passed 00:13:39.520 Test: blockdev write read invalid size ...passed 00:13:39.520 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:39.520 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:39.520 Test: blockdev write read max offset ...passed 00:13:39.520 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:39.520 Test: blockdev writev readv 8 blocks ...passed 00:13:39.520 Test: blockdev writev readv 30 x 1block ...passed 00:13:39.520 Test: blockdev writev readv block ...passed 00:13:39.520 Test: blockdev writev readv size > 128k ...passed 00:13:39.520 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:39.520 Test: blockdev comparev and writev ...passed 00:13:39.520 Test: blockdev nvme passthru rw ...passed 00:13:39.520 Test: blockdev nvme passthru vendor 
specific ...passed 00:13:39.520 Test: blockdev nvme admin passthru ...passed 00:13:39.520 Test: blockdev copy ...passed 00:13:39.520 00:13:39.520 Run Summary: Type Total Ran Passed Failed Inactive 00:13:39.520 suites 6 6 n/a 0 0 00:13:39.521 tests 138 138 138 0 0 00:13:39.521 asserts 780 780 780 0 n/a 00:13:39.521 00:13:39.521 Elapsed time = 1.174 seconds 00:13:39.521 0 00:13:39.521 19:14:16 -- bdev/blockdev.sh@293 -- # killprocess 69198 00:13:39.521 19:14:16 -- common/autotest_common.sh@924 -- # '[' -z 69198 ']' 00:13:39.521 19:14:16 -- common/autotest_common.sh@928 -- # kill -0 69198 00:13:39.521 19:14:16 -- common/autotest_common.sh@929 -- # uname 00:13:39.521 19:14:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:39.521 19:14:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 69198 00:13:39.521 killing process with pid 69198 00:13:39.521 19:14:16 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:39.521 19:14:16 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:39.521 19:14:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 69198' 00:13:39.521 19:14:16 -- common/autotest_common.sh@943 -- # kill 69198 00:13:39.521 [2024-02-14 19:14:16.797476] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:13:39.521 19:14:16 -- common/autotest_common.sh@948 -- # wait 69198 00:13:40.455 19:14:17 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:13:40.455 00:13:40.455 real 0m2.623s 00:13:40.455 user 0m6.343s 00:13:40.455 sys 0m0.330s 00:13:40.455 ************************************ 00:13:40.455 END TEST bdev_bounds 00:13:40.455 ************************************ 00:13:40.456 19:14:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:40.456 19:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:40.713 19:14:17 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:13:40.713 19:14:17 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:13:40.713 19:14:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:40.713 19:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:40.713 ************************************ 00:13:40.713 START TEST bdev_nbd 00:13:40.713 ************************************ 00:13:40.713 19:14:17 -- common/autotest_common.sh@1102 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:13:40.713 19:14:17 -- bdev/blockdev.sh@298 -- # uname -s 00:13:40.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
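The bdev_bounds stage that just completed (pid 69198) is driven in two steps: bdevio is started with -w so it waits after loading the same bdev.json config, and tests.py perform_tests then triggers the per-bdev CUnit suites over its RPC socket. Roughly, and leaving out the waitforlisten and cleanup plumbing the harness adds:

# start bdevio in wait mode, then kick the suites; paths are repo-relative as in the trace
./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
./test/bdev/bdevio/tests.py perform_tests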
00:13:40.713 19:14:17 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:13:40.713 19:14:17 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:40.713 19:14:17 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:40.713 19:14:17 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:40.713 19:14:17 -- bdev/blockdev.sh@302 -- # local bdev_all 00:13:40.713 19:14:17 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:13:40.713 19:14:17 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:13:40.713 19:14:17 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:40.713 19:14:17 -- bdev/blockdev.sh@309 -- # local nbd_all 00:13:40.713 19:14:17 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:13:40.713 19:14:17 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:40.713 19:14:17 -- bdev/blockdev.sh@312 -- # local nbd_list 00:13:40.713 19:14:17 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:40.713 19:14:17 -- bdev/blockdev.sh@313 -- # local bdev_list 00:13:40.713 19:14:17 -- bdev/blockdev.sh@316 -- # nbd_pid=69252 00:13:40.713 19:14:17 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:40.713 19:14:17 -- bdev/blockdev.sh@318 -- # waitforlisten 69252 /var/tmp/spdk-nbd.sock 00:13:40.713 19:14:17 -- common/autotest_common.sh@817 -- # '[' -z 69252 ']' 00:13:40.713 19:14:17 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:40.713 19:14:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:40.714 19:14:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:40.714 19:14:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:40.714 19:14:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:40.714 19:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:40.714 [2024-02-14 19:14:17.968362] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:13:40.714 [2024-02-14 19:14:17.968800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.714 [2024-02-14 19:14:18.128168] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.972 [2024-02-14 19:14:18.303617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.972 [2024-02-14 19:14:18.303924] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:13:41.537 19:14:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.537 19:14:18 -- common/autotest_common.sh@850 -- # return 0 00:13:41.537 19:14:18 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@24 -- # local i 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:41.537 19:14:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:41.796 19:14:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:41.796 19:14:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:41.796 19:14:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:41.796 19:14:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:41.796 19:14:19 -- common/autotest_common.sh@855 -- # local i 00:13:41.796 19:14:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:41.796 19:14:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:41.796 19:14:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:41.796 19:14:19 -- common/autotest_common.sh@859 -- # break 00:13:41.796 19:14:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:41.796 19:14:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:41.796 19:14:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.796 1+0 records in 00:13:41.796 1+0 records out 00:13:41.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532462 s, 7.7 MB/s 00:13:41.796 19:14:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.796 19:14:19 -- common/autotest_common.sh@872 -- # size=4096 00:13:41.796 19:14:19 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
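Each nbd_start_disk in this stretch is followed by a waitfornbd check: the helper polls /proc/partitions for the new nbd name, then pushes a single 4 KiB O_DIRECT read through the device and stats the copied file to prove the export is actually serving I/O. A rough sketch of that check for nbd0, with the scratch file moved to /tmp for illustration (the trace uses test/bdev/nbdtest inside the repo):

# device must be visible and must serve one direct 4 KiB read before the test continues
grep -q -w nbd0 /proc/partitions
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
test "$(stat -c %s /tmp/nbdtest)" != 0 && rm -f /tmp/nbdtest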
00:13:41.796 19:14:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:41.796 19:14:19 -- common/autotest_common.sh@875 -- # return 0 00:13:41.796 19:14:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:41.796 19:14:19 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:41.796 19:14:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:42.054 19:14:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:42.054 19:14:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:42.054 19:14:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:42.054 19:14:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:13:42.054 19:14:19 -- common/autotest_common.sh@855 -- # local i 00:13:42.054 19:14:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:42.054 19:14:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:42.054 19:14:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:13:42.054 19:14:19 -- common/autotest_common.sh@859 -- # break 00:13:42.054 19:14:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:42.054 19:14:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:42.054 19:14:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.054 1+0 records in 00:13:42.054 1+0 records out 00:13:42.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523489 s, 7.8 MB/s 00:13:42.054 19:14:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.054 19:14:19 -- common/autotest_common.sh@872 -- # size=4096 00:13:42.054 19:14:19 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.054 19:14:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:42.054 19:14:19 -- common/autotest_common.sh@875 -- # return 0 00:13:42.054 19:14:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:42.054 19:14:19 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:42.054 19:14:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:13:42.313 19:14:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:42.313 19:14:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:42.313 19:14:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:42.313 19:14:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:13:42.313 19:14:19 -- common/autotest_common.sh@855 -- # local i 00:13:42.313 19:14:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:42.313 19:14:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:42.313 19:14:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:13:42.313 19:14:19 -- common/autotest_common.sh@859 -- # break 00:13:42.313 19:14:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:42.313 19:14:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:42.313 19:14:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.313 1+0 records in 00:13:42.313 1+0 records out 00:13:42.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048705 s, 8.4 MB/s 00:13:42.313 19:14:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.313 19:14:19 -- common/autotest_common.sh@872 -- # size=4096 00:13:42.313 19:14:19 -- 
common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.313 19:14:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:42.313 19:14:19 -- common/autotest_common.sh@875 -- # return 0 00:13:42.313 19:14:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:42.313 19:14:19 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:42.313 19:14:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:13:42.572 19:14:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:42.572 19:14:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:42.572 19:14:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:42.572 19:14:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:13:42.572 19:14:19 -- common/autotest_common.sh@855 -- # local i 00:13:42.572 19:14:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:42.572 19:14:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:42.572 19:14:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:13:42.572 19:14:19 -- common/autotest_common.sh@859 -- # break 00:13:42.572 19:14:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:42.572 19:14:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:42.572 19:14:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.572 1+0 records in 00:13:42.572 1+0 records out 00:13:42.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120204 s, 3.4 MB/s 00:13:42.572 19:14:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.572 19:14:19 -- common/autotest_common.sh@872 -- # size=4096 00:13:42.572 19:14:19 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.572 19:14:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:42.572 19:14:19 -- common/autotest_common.sh@875 -- # return 0 00:13:42.572 19:14:19 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:42.572 19:14:19 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:42.572 19:14:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:42.831 19:14:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:42.831 19:14:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:42.831 19:14:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:42.831 19:14:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:13:42.831 19:14:20 -- common/autotest_common.sh@855 -- # local i 00:13:42.831 19:14:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:42.831 19:14:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:42.831 19:14:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:13:42.831 19:14:20 -- common/autotest_common.sh@859 -- # break 00:13:42.831 19:14:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:42.831 19:14:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:42.831 19:14:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.831 1+0 records in 00:13:42.831 1+0 records out 00:13:42.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724065 s, 5.7 MB/s 00:13:42.831 19:14:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.831 
19:14:20 -- common/autotest_common.sh@872 -- # size=4096 00:13:42.831 19:14:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.831 19:14:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:42.831 19:14:20 -- common/autotest_common.sh@875 -- # return 0 00:13:42.831 19:14:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:42.831 19:14:20 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:42.831 19:14:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:43.089 19:14:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:43.089 19:14:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:43.089 19:14:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:43.089 19:14:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:13:43.089 19:14:20 -- common/autotest_common.sh@855 -- # local i 00:13:43.089 19:14:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:43.089 19:14:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:43.089 19:14:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:13:43.089 19:14:20 -- common/autotest_common.sh@859 -- # break 00:13:43.089 19:14:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:43.089 19:14:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:43.089 19:14:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.089 1+0 records in 00:13:43.089 1+0 records out 00:13:43.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106898 s, 3.8 MB/s 00:13:43.089 19:14:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.089 19:14:20 -- common/autotest_common.sh@872 -- # size=4096 00:13:43.089 19:14:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.089 19:14:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:43.089 19:14:20 -- common/autotest_common.sh@875 -- # return 0 00:13:43.089 19:14:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:43.089 19:14:20 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:43.089 19:14:20 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd0", 00:13:43.348 "bdev_name": "nvme0n1" 00:13:43.348 }, 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd1", 00:13:43.348 "bdev_name": "nvme1n1" 00:13:43.348 }, 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd2", 00:13:43.348 "bdev_name": "nvme1n2" 00:13:43.348 }, 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd3", 00:13:43.348 "bdev_name": "nvme1n3" 00:13:43.348 }, 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd4", 00:13:43.348 "bdev_name": "nvme2n1" 00:13:43.348 }, 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd5", 00:13:43.348 "bdev_name": "nvme3n1" 00:13:43.348 } 00:13:43.348 ]' 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd0", 00:13:43.348 "bdev_name": "nvme0n1" 00:13:43.348 }, 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd1", 
00:13:43.348 "bdev_name": "nvme1n1" 00:13:43.348 }, 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd2", 00:13:43.348 "bdev_name": "nvme1n2" 00:13:43.348 }, 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd3", 00:13:43.348 "bdev_name": "nvme1n3" 00:13:43.348 }, 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd4", 00:13:43.348 "bdev_name": "nvme2n1" 00:13:43.348 }, 00:13:43.348 { 00:13:43.348 "nbd_device": "/dev/nbd5", 00:13:43.348 "bdev_name": "nvme3n1" 00:13:43.348 } 00:13:43.348 ]' 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@51 -- # local i 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.348 19:14:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:43.606 19:14:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.606 19:14:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.606 19:14:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.606 19:14:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.606 19:14:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.606 19:14:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.606 19:14:20 -- bdev/nbd_common.sh@41 -- # break 00:13:43.606 19:14:20 -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.606 19:14:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.606 19:14:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:43.864 19:14:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:43.864 19:14:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:43.864 19:14:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:43.864 19:14:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.864 19:14:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.864 19:14:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:43.864 19:14:21 -- bdev/nbd_common.sh@41 -- # break 00:13:43.864 19:14:21 -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.864 19:14:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.864 19:14:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:44.122 19:14:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:44.122 19:14:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:44.122 19:14:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:44.122 19:14:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.122 19:14:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.122 19:14:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:44.122 19:14:21 -- bdev/nbd_common.sh@41 -- # break 00:13:44.122 19:14:21 -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.122 19:14:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.122 19:14:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 
00:13:44.380 19:14:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:44.380 19:14:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:44.380 19:14:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:44.380 19:14:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.380 19:14:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.380 19:14:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:44.380 19:14:21 -- bdev/nbd_common.sh@41 -- # break 00:13:44.380 19:14:21 -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.380 19:14:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.380 19:14:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:44.638 19:14:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:44.638 19:14:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:44.638 19:14:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:44.638 19:14:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.638 19:14:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.638 19:14:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:44.638 19:14:21 -- bdev/nbd_common.sh@41 -- # break 00:13:44.638 19:14:21 -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.638 19:14:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.638 19:14:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@41 -- # break 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.896 19:14:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@65 -- # true 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@65 -- # count=0 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@122 -- # count=0 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@127 -- # return 0 00:13:45.154 19:14:22 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@91 -- # 
bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:45.154 19:14:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:45.155 19:14:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:45.155 19:14:22 -- bdev/nbd_common.sh@12 -- # local i 00:13:45.155 19:14:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:45.155 19:14:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:45.155 19:14:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:45.484 /dev/nbd0 00:13:45.484 19:14:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:45.484 19:14:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:45.484 19:14:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:45.484 19:14:22 -- common/autotest_common.sh@855 -- # local i 00:13:45.484 19:14:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:45.484 19:14:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:45.484 19:14:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:45.484 19:14:22 -- common/autotest_common.sh@859 -- # break 00:13:45.484 19:14:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:45.484 19:14:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:45.484 19:14:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.484 1+0 records in 00:13:45.484 1+0 records out 00:13:45.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974283 s, 4.2 MB/s 00:13:45.484 19:14:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.484 19:14:22 -- common/autotest_common.sh@872 -- # size=4096 00:13:45.484 19:14:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.484 19:14:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:45.484 19:14:22 -- common/autotest_common.sh@875 -- # return 0 00:13:45.484 19:14:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.484 19:14:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:45.484 19:14:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:45.742 /dev/nbd1 00:13:45.742 19:14:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:45.742 19:14:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:45.742 19:14:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:13:45.742 19:14:22 -- common/autotest_common.sh@855 -- # local i 00:13:45.742 19:14:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 
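For the data-verify pass the harness maps each bdev onto an explicit nbd node (nvme0n1 -> /dev/nbd0, nvme1n1 -> /dev/nbd1, nvme1n2 -> /dev/nbd10, and so on), unlike the earlier pass, which called nbd_start_disk without naming a node. The two RPCs involved, taken straight from the trace:

# claim a specific nbd node for a bdev, then dump the resulting bdev-to-nbd mapping as JSON
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks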
00:13:45.742 19:14:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:45.742 19:14:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:13:45.742 19:14:22 -- common/autotest_common.sh@859 -- # break 00:13:45.742 19:14:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:45.742 19:14:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:45.742 19:14:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.742 1+0 records in 00:13:45.742 1+0 records out 00:13:45.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362869 s, 11.3 MB/s 00:13:45.742 19:14:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.742 19:14:22 -- common/autotest_common.sh@872 -- # size=4096 00:13:45.742 19:14:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.742 19:14:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:45.742 19:14:22 -- common/autotest_common.sh@875 -- # return 0 00:13:45.742 19:14:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.742 19:14:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:45.742 19:14:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:13:46.000 /dev/nbd10 00:13:46.000 19:14:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:46.000 19:14:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:46.000 19:14:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:13:46.000 19:14:23 -- common/autotest_common.sh@855 -- # local i 00:13:46.001 19:14:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:46.001 19:14:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:46.001 19:14:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:13:46.001 19:14:23 -- common/autotest_common.sh@859 -- # break 00:13:46.001 19:14:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:46.001 19:14:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:46.001 19:14:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.001 1+0 records in 00:13:46.001 1+0 records out 00:13:46.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530907 s, 7.7 MB/s 00:13:46.001 19:14:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.001 19:14:23 -- common/autotest_common.sh@872 -- # size=4096 00:13:46.001 19:14:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.001 19:14:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:46.001 19:14:23 -- common/autotest_common.sh@875 -- # return 0 00:13:46.001 19:14:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.001 19:14:23 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:46.001 19:14:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:13:46.267 /dev/nbd11 00:13:46.267 19:14:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:46.267 19:14:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:46.267 19:14:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:13:46.267 19:14:23 -- common/autotest_common.sh@855 -- # local i 00:13:46.267 19:14:23 -- common/autotest_common.sh@857 -- 
# (( i = 1 )) 00:13:46.267 19:14:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:46.267 19:14:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:13:46.267 19:14:23 -- common/autotest_common.sh@859 -- # break 00:13:46.267 19:14:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:46.267 19:14:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:46.267 19:14:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.267 1+0 records in 00:13:46.267 1+0 records out 00:13:46.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484158 s, 8.5 MB/s 00:13:46.267 19:14:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.267 19:14:23 -- common/autotest_common.sh@872 -- # size=4096 00:13:46.267 19:14:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.267 19:14:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:46.267 19:14:23 -- common/autotest_common.sh@875 -- # return 0 00:13:46.267 19:14:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.267 19:14:23 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:46.267 19:14:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:13:46.525 /dev/nbd12 00:13:46.525 19:14:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:46.525 19:14:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:46.525 19:14:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:13:46.525 19:14:23 -- common/autotest_common.sh@855 -- # local i 00:13:46.525 19:14:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:46.525 19:14:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:46.525 19:14:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:13:46.525 19:14:23 -- common/autotest_common.sh@859 -- # break 00:13:46.525 19:14:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:46.525 19:14:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:46.525 19:14:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.525 1+0 records in 00:13:46.525 1+0 records out 00:13:46.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000944293 s, 4.3 MB/s 00:13:46.525 19:14:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.525 19:14:23 -- common/autotest_common.sh@872 -- # size=4096 00:13:46.525 19:14:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.525 19:14:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:46.525 19:14:23 -- common/autotest_common.sh@875 -- # return 0 00:13:46.525 19:14:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.525 19:14:23 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:46.525 19:14:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:46.783 /dev/nbd13 00:13:46.783 19:14:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:46.783 19:14:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:46.783 19:14:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:13:46.783 19:14:23 -- common/autotest_common.sh@855 -- # local i 00:13:46.783 19:14:23 -- 
common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:46.783 19:14:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:46.783 19:14:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:13:46.783 19:14:23 -- common/autotest_common.sh@859 -- # break 00:13:46.783 19:14:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:46.783 19:14:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:46.783 19:14:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.783 1+0 records in 00:13:46.783 1+0 records out 00:13:46.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111769 s, 3.7 MB/s 00:13:46.783 19:14:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.783 19:14:24 -- common/autotest_common.sh@872 -- # size=4096 00:13:46.783 19:14:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.783 19:14:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:46.783 19:14:24 -- common/autotest_common.sh@875 -- # return 0 00:13:46.783 19:14:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.783 19:14:24 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:46.783 19:14:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:46.783 19:14:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:46.783 19:14:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd0", 00:13:47.041 "bdev_name": "nvme0n1" 00:13:47.041 }, 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd1", 00:13:47.041 "bdev_name": "nvme1n1" 00:13:47.041 }, 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd10", 00:13:47.041 "bdev_name": "nvme1n2" 00:13:47.041 }, 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd11", 00:13:47.041 "bdev_name": "nvme1n3" 00:13:47.041 }, 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd12", 00:13:47.041 "bdev_name": "nvme2n1" 00:13:47.041 }, 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd13", 00:13:47.041 "bdev_name": "nvme3n1" 00:13:47.041 } 00:13:47.041 ]' 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd0", 00:13:47.041 "bdev_name": "nvme0n1" 00:13:47.041 }, 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd1", 00:13:47.041 "bdev_name": "nvme1n1" 00:13:47.041 }, 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd10", 00:13:47.041 "bdev_name": "nvme1n2" 00:13:47.041 }, 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd11", 00:13:47.041 "bdev_name": "nvme1n3" 00:13:47.041 }, 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd12", 00:13:47.041 "bdev_name": "nvme2n1" 00:13:47.041 }, 00:13:47.041 { 00:13:47.041 "nbd_device": "/dev/nbd13", 00:13:47.041 "bdev_name": "nvme3n1" 00:13:47.041 } 00:13:47.041 ]' 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:47.041 /dev/nbd1 00:13:47.041 /dev/nbd10 00:13:47.041 /dev/nbd11 00:13:47.041 /dev/nbd12 00:13:47.041 /dev/nbd13' 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:47.041 /dev/nbd1 00:13:47.041 /dev/nbd10 00:13:47.041 /dev/nbd11 00:13:47.041 /dev/nbd12 00:13:47.041 /dev/nbd13' 00:13:47.041 19:14:24 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@65 -- # count=6 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@66 -- # echo 6 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@95 -- # count=6 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:47.041 256+0 records in 00:13:47.041 256+0 records out 00:13:47.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639213 s, 164 MB/s 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:47.041 19:14:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:47.299 256+0 records in 00:13:47.299 256+0 records out 00:13:47.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171717 s, 6.1 MB/s 00:13:47.299 19:14:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:47.299 19:14:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:47.299 256+0 records in 00:13:47.299 256+0 records out 00:13:47.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172904 s, 6.1 MB/s 00:13:47.299 19:14:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:47.299 19:14:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:47.556 256+0 records in 00:13:47.556 256+0 records out 00:13:47.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174825 s, 6.0 MB/s 00:13:47.556 19:14:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:47.556 19:14:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:47.814 256+0 records in 00:13:47.814 256+0 records out 00:13:47.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164782 s, 6.4 MB/s 00:13:47.814 19:14:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:47.814 19:14:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:47.814 256+0 records in 00:13:47.814 256+0 records out 00:13:47.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18205 s, 5.8 MB/s 00:13:47.814 19:14:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:47.814 19:14:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:48.072 256+0 records in 00:13:48.072 256+0 records out 00:13:48.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170799 s, 6.1 MB/s 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:48.072 19:14:25 
-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@51 -- # local i 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.072 19:14:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:48.331 19:14:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:48.331 19:14:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:48.331 19:14:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:48.331 19:14:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.331 19:14:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.331 19:14:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:48.331 19:14:25 -- bdev/nbd_common.sh@41 -- # break 00:13:48.331 19:14:25 -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.331 19:14:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.331 19:14:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:48.588 19:14:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:48.846 19:14:26 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@41 -- # break 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@41 -- # break 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.846 19:14:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:49.104 19:14:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:49.104 19:14:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:49.104 19:14:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:49.104 19:14:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.104 19:14:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.104 19:14:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:49.104 19:14:26 -- bdev/nbd_common.sh@41 -- # break 00:13:49.104 19:14:26 -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.104 19:14:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.104 19:14:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:49.362 19:14:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:49.362 19:14:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:49.362 19:14:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:49.362 19:14:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.362 19:14:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.362 19:14:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:49.362 19:14:26 -- bdev/nbd_common.sh@41 -- # break 00:13:49.362 19:14:26 -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.362 19:14:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.362 19:14:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:49.620 19:14:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:49.620 19:14:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:49.620 19:14:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:49.620 19:14:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.620 19:14:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.620 19:14:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:49.620 19:14:26 -- bdev/nbd_common.sh@41 -- # break 00:13:49.620 19:14:27 -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.620 19:14:27 -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:13:49.620 19:14:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:49.620 19:14:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@65 -- # true 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@65 -- # count=0 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@104 -- # count=0 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@109 -- # return 0 00:13:49.878 19:14:27 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:49.878 19:14:27 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:50.135 malloc_lvol_verify 00:13:50.135 19:14:27 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:50.393 9f025b10-6e92-49ad-9bc6-6488a7b6a242 00:13:50.650 19:14:27 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:50.650 0eb12488-9c60-4492-a39d-606442c27b11 00:13:50.650 19:14:28 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:50.907 /dev/nbd0 00:13:50.907 19:14:28 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:50.907 mke2fs 1.46.5 (30-Dec-2021) 00:13:50.907 Discarding device blocks: 0/4096 done 00:13:50.907 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:50.907 00:13:50.907 Allocating group tables: 0/1 done 00:13:50.907 Writing inode tables: 0/1 done 00:13:50.907 Creating journal (1024 blocks): done 00:13:50.907 Writing superblocks and filesystem accounting information: 0/1 done 00:13:50.907 00:13:50.907 19:14:28 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:50.907 19:14:28 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:50.907 19:14:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:50.907 19:14:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:50.907 19:14:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.907 19:14:28 -- bdev/nbd_common.sh@51 -- # local i 00:13:50.907 19:14:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.907 19:14:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 
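The tail of the nbd stage above (nbd_with_lvol_verify) boils down to a short RPC sequence against the spdk-nbd socket plus a format check. A stand-alone sketch of that flow, with the paths, names and argument values copied from the trace (the size comments are interpretation, not something the test echoes), would be roughly:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  # backing malloc bdev for the lvstore (arguments as traced: 16, 512)
  $RPC -s $SOCK bdev_malloc_create -b malloc_lvol_verify 16 512
  # lvstore on top of it, then a small lvol named 'lvol'
  $RPC -s $SOCK bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $RPC -s $SOCK bdev_lvol_create lvol 4 -l lvs
  # expose the lvol over nbd, format it, then tear the export down
  $RPC -s $SOCK nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0      # a clean format is what sets mkfs_ret=0 in the test
  $RPC -s $SOCK nbd_stop_disk /dev/nbd0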
00:13:51.163 19:14:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.163 19:14:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.163 19:14:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.163 19:14:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.163 19:14:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.163 19:14:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.163 19:14:28 -- bdev/nbd_common.sh@41 -- # break 00:13:51.163 19:14:28 -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.163 19:14:28 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:51.163 19:14:28 -- bdev/nbd_common.sh@147 -- # return 0 00:13:51.163 19:14:28 -- bdev/blockdev.sh@324 -- # killprocess 69252 00:13:51.163 19:14:28 -- common/autotest_common.sh@924 -- # '[' -z 69252 ']' 00:13:51.163 19:14:28 -- common/autotest_common.sh@928 -- # kill -0 69252 00:13:51.163 19:14:28 -- common/autotest_common.sh@929 -- # uname 00:13:51.163 19:14:28 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:51.163 19:14:28 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 69252 00:13:51.163 19:14:28 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:51.163 19:14:28 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:51.163 killing process with pid 69252 00:13:51.163 19:14:28 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 69252' 00:13:51.163 19:14:28 -- common/autotest_common.sh@943 -- # kill 69252 00:13:51.163 [2024-02-14 19:14:28.558273] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:13:51.163 19:14:28 -- common/autotest_common.sh@948 -- # wait 69252 00:13:52.535 19:14:29 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:13:52.535 00:13:52.535 real 0m11.774s 00:13:52.535 user 0m16.398s 00:13:52.535 sys 0m3.869s 00:13:52.535 19:14:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:52.535 19:14:29 -- common/autotest_common.sh@10 -- # set +x 00:13:52.535 ************************************ 00:13:52.535 END TEST bdev_nbd 00:13:52.535 ************************************ 00:13:52.535 19:14:29 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:13:52.535 19:14:29 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:52.535 19:14:29 -- common/autotest_common.sh@10 -- # set +x 00:13:52.535 ************************************ 00:13:52.535 START TEST bdev_fio 00:13:52.535 ************************************ 00:13:52.535 19:14:29 -- common/autotest_common.sh@1102 -- # fio_test_suite '' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@329 -- # local env_context 00:13:52.535 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:52.535 19:14:29 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:52.535 19:14:29 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:52.535 19:14:29 -- bdev/blockdev.sh@337 -- # echo '' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:13:52.535 19:14:29 -- bdev/blockdev.sh@337 -- # env_context= 
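Every nbd_start_disk earlier in this trace is followed by the same readiness probe from autotest_common.sh: poll /proc/partitions for the device name, then read one block back through the export. Reconstructed from the repeated xtrace lines (the retry pacing is an assumption; only the loop counters, the dd/stat/rm sequence and the size check are echoed), the helper is roughly:

  waitfornbd() {
      local nbd_name=$1
      local i
      # wait for the kernel to register the nbd partition (up to 20 tries)
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1    # assumed pacing; the trace only shows the loop counters
      done
      # prove the export is readable: one 4 KiB O_DIRECT read into a scratch file
      dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
      rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      # a non-empty read means the device is live
      [ "$size" != 0 ] && return 0
      return 1
  }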
00:13:52.535 19:14:29 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1257 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:52.535 19:14:29 -- common/autotest_common.sh@1258 -- # local workload=verify 00:13:52.535 19:14:29 -- common/autotest_common.sh@1259 -- # local bdev_type=AIO 00:13:52.535 19:14:29 -- common/autotest_common.sh@1260 -- # local env_context= 00:13:52.535 19:14:29 -- common/autotest_common.sh@1261 -- # local fio_dir=/usr/src/fio 00:13:52.535 19:14:29 -- common/autotest_common.sh@1263 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1268 -- # '[' -z verify ']' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1272 -- # '[' -n '' ']' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1276 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:52.535 19:14:29 -- common/autotest_common.sh@1278 -- # cat 00:13:52.535 19:14:29 -- common/autotest_common.sh@1290 -- # '[' verify == verify ']' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1291 -- # cat 00:13:52.535 19:14:29 -- common/autotest_common.sh@1300 -- # '[' AIO == AIO ']' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1301 -- # /usr/src/fio/fio --version 00:13:52.535 19:14:29 -- common/autotest_common.sh@1301 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:52.535 19:14:29 -- common/autotest_common.sh@1302 -- # echo serialize_overlap=1 00:13:52.535 19:14:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:52.535 19:14:29 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:13:52.535 19:14:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:52.535 19:14:29 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:13:52.535 19:14:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:52.535 19:14:29 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:13:52.535 19:14:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:52.535 19:14:29 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:13:52.535 19:14:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:52.535 19:14:29 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:13:52.535 19:14:29 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:52.535 19:14:29 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:13:52.535 19:14:29 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:52.535 19:14:29 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:52.535 19:14:29 -- common/autotest_common.sh@1075 -- # '[' 11 -le 
1 ']' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:52.535 19:14:29 -- common/autotest_common.sh@10 -- # set +x 00:13:52.535 ************************************ 00:13:52.535 START TEST bdev_fio_rw_verify 00:13:52.535 ************************************ 00:13:52.535 19:14:29 -- common/autotest_common.sh@1102 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:52.535 19:14:29 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:52.535 19:14:29 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:13:52.535 19:14:29 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:52.535 19:14:29 -- common/autotest_common.sh@1316 -- # local sanitizers 00:13:52.535 19:14:29 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:52.535 19:14:29 -- common/autotest_common.sh@1318 -- # shift 00:13:52.535 19:14:29 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:13:52.535 19:14:29 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:13:52.535 19:14:29 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:52.535 19:14:29 -- common/autotest_common.sh@1322 -- # grep libasan 00:13:52.535 19:14:29 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1322 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:52.535 19:14:29 -- common/autotest_common.sh@1323 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:52.535 19:14:29 -- common/autotest_common.sh@1324 -- # break 00:13:52.535 19:14:29 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:52.535 19:14:29 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:52.793 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:52.793 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:52.793 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:52.793 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:52.793 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:52.793 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:52.793 fio-3.35 00:13:52.793 Starting 6 threads 00:14:04.989 00:14:04.989 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69671: Wed Feb 14 
19:14:40 2024 00:14:04.989 read: IOPS=29.5k, BW=115MiB/s (121MB/s)(1153MiB/10001msec) 00:14:04.989 slat (usec): min=2, max=687, avg= 6.59, stdev= 3.96 00:14:04.989 clat (usec): min=109, max=252568, avg=648.03, stdev=1326.79 00:14:04.989 lat (usec): min=114, max=252574, avg=654.62, stdev=1326.88 00:14:04.989 clat percentiles (usec): 00:14:04.989 | 50.000th=[ 668], 99.000th=[ 1139], 99.900th=[ 1532], 00:14:04.989 | 99.990th=[ 3359], 99.999th=[252707] 00:14:04.989 write: IOPS=29.9k, BW=117MiB/s (122MB/s)(1167MiB/10001msec); 0 zone resets 00:14:04.989 slat (usec): min=14, max=1312, avg=24.29, stdev=22.03 00:14:04.989 clat (usec): min=92, max=3978, avg=709.24, stdev=208.51 00:14:04.989 lat (usec): min=115, max=4004, avg=733.53, stdev=210.19 00:14:04.989 clat percentiles (usec): 00:14:04.989 | 50.000th=[ 725], 99.000th=[ 1254], 99.900th=[ 1663], 99.990th=[ 2245], 00:14:04.989 | 99.999th=[ 3916] 00:14:04.989 bw ( KiB/s): min=90448, max=145384, per=99.77%, avg=119230.63, stdev=2570.30, samples=114 00:14:04.989 iops : min=22612, max=36346, avg=29807.42, stdev=642.58, samples=114 00:14:04.989 lat (usec) : 100=0.01%, 250=2.48%, 500=17.31%, 750=43.75%, 1000=31.82% 00:14:04.989 lat (msec) : 2=4.62%, 4=0.02%, 500=0.01% 00:14:04.989 cpu : usr=63.40%, sys=24.45%, ctx=6636, majf=0, minf=26836 00:14:04.989 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.989 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.989 issued rwts: total=295182,298781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:04.989 00:14:04.989 Run status group 0 (all jobs): 00:14:04.989 READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=1153MiB (1209MB), run=10001-10001msec 00:14:04.989 WRITE: bw=117MiB/s (122MB/s), 117MiB/s-117MiB/s (122MB/s-122MB/s), io=1167MiB (1224MB), run=10001-10001msec 00:14:04.989 ----------------------------------------------------- 00:14:04.989 Suppressions used: 00:14:04.989 count bytes template 00:14:04.989 6 48 /usr/src/fio/parse.c 00:14:04.989 3404 326784 /usr/src/fio/iolog.c 00:14:04.989 1 8 libtcmalloc_minimal.so 00:14:04.989 1 904 libcrypto.so 00:14:04.989 ----------------------------------------------------- 00:14:04.989 00:14:04.989 00:14:04.989 real 0m12.153s 00:14:04.989 user 0m39.844s 00:14:04.989 sys 0m15.042s 00:14:04.989 19:14:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:04.989 ************************************ 00:14:04.989 END TEST bdev_fio_rw_verify 00:14:04.989 ************************************ 00:14:04.989 19:14:41 -- common/autotest_common.sh@10 -- # set +x 00:14:04.989 19:14:41 -- bdev/blockdev.sh@348 -- # rm -f 00:14:04.989 19:14:41 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:04.989 19:14:41 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:04.989 19:14:41 -- common/autotest_common.sh@1257 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:04.989 19:14:41 -- common/autotest_common.sh@1258 -- # local workload=trim 00:14:04.989 19:14:41 -- common/autotest_common.sh@1259 -- # local bdev_type= 00:14:04.989 19:14:41 -- common/autotest_common.sh@1260 -- # local env_context= 00:14:04.989 19:14:41 -- common/autotest_common.sh@1261 -- # local fio_dir=/usr/src/fio 00:14:04.989 19:14:41 -- common/autotest_common.sh@1263 
-- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:04.989 19:14:41 -- common/autotest_common.sh@1268 -- # '[' -z trim ']' 00:14:04.989 19:14:41 -- common/autotest_common.sh@1272 -- # '[' -n '' ']' 00:14:04.989 19:14:41 -- common/autotest_common.sh@1276 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:04.989 19:14:41 -- common/autotest_common.sh@1278 -- # cat 00:14:04.989 19:14:41 -- common/autotest_common.sh@1290 -- # '[' trim == verify ']' 00:14:04.989 19:14:41 -- common/autotest_common.sh@1305 -- # '[' trim == trim ']' 00:14:04.989 19:14:41 -- common/autotest_common.sh@1306 -- # echo rw=trimwrite 00:14:04.989 19:14:41 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:04.989 19:14:41 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1e002af8-cddf-4d09-ab86-615c5fa664fc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1e002af8-cddf-4d09-ab86-615c5fa664fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "23696e61-fbb9-4ac4-9486-e5ad359115c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23696e61-fbb9-4ac4-9486-e5ad359115c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "776f5e33-cc91-4f86-b087-93b02bcc4f52"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "776f5e33-cc91-4f86-b087-93b02bcc4f52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "1e639dfd-c214-4a69-a5eb-3841f789be14"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1e639dfd-c214-4a69-a5eb-3841f789be14",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' 
' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b1477262-76af-4101-af52-5abf6763a06d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b1477262-76af-4101-af52-5abf6763a06d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "caab780e-cd05-48ba-9227-86fe5307e7e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "caab780e-cd05-48ba-9227-86fe5307e7e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:04.989 19:14:42 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:14:04.989 19:14:42 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:04.989 19:14:42 -- bdev/blockdev.sh@360 -- # popd 00:14:04.989 /home/vagrant/spdk_repo/spdk 00:14:04.989 19:14:42 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:14:04.989 19:14:42 -- bdev/blockdev.sh@362 -- # return 0 00:14:04.989 00:14:04.989 real 0m12.335s 00:14:04.990 user 0m39.955s 00:14:04.990 sys 0m15.110s 00:14:04.990 19:14:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:04.990 19:14:42 -- common/autotest_common.sh@10 -- # set +x 00:14:04.990 ************************************ 00:14:04.990 END TEST bdev_fio 00:14:04.990 ************************************ 00:14:04.990 19:14:42 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:04.990 19:14:42 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:04.990 19:14:42 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:14:04.990 19:14:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:04.990 19:14:42 -- common/autotest_common.sh@10 -- # set +x 00:14:04.990 ************************************ 00:14:04.990 START TEST bdev_verify 00:14:04.990 ************************************ 00:14:04.990 19:14:42 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:04.990 [2024-02-14 19:14:42.178717] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
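For reference, the bdev_fio stage above assembled test/bdev/bdev.fio from one echoed job section per bdev and passed the global I/O options on the fio command line. Stitched back together from those echo lines (the verify/[global] settings that fio_config_gen cats into the file are not shown in the trace and are omitted here), the run amounts to roughly:

  # job file: one section per bdev, as echoed by blockdev.sh
  cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
  [job_nvme0n1]
  filename=nvme0n1
  [job_nvme1n1]
  filename=nvme1n1
  [job_nvme1n2]
  filename=nvme1n2
  [job_nvme1n3]
  filename=nvme1n3
  [job_nvme2n1]
  filename=nvme2n1
  [job_nvme3n1]
  filename=nvme3n1
  EOF
  # fio runs through the spdk_bdev ioengine, preloaded with ASan as in the trace
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 \
      --aux-path=/home/vagrant/spdk_repo/spdk/../output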
00:14:04.990 [2024-02-14 19:14:42.178873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69844 ] 00:14:04.990 [2024-02-14 19:14:42.342516] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:05.248 [2024-02-14 19:14:42.570636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.248 [2024-02-14 19:14:42.570646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.248 [2024-02-14 19:14:42.570843] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:05.840 Running I/O for 5 seconds... 00:14:11.107 00:14:11.107 Latency(us) 00:14:11.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.107 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0x0 length 0x20000 00:14:11.107 nvme0n1 : 5.06 2405.91 9.40 0.00 0.00 53020.24 16443.58 72447.07 00:14:11.107 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0x20000 length 0x20000 00:14:11.107 nvme0n1 : 5.06 2409.77 9.41 0.00 0.00 52847.49 8340.95 81026.33 00:14:11.107 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0x0 length 0x80000 00:14:11.107 nvme1n1 : 5.07 2538.46 9.92 0.00 0.00 50271.03 4319.42 83409.45 00:14:11.107 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0x80000 length 0x80000 00:14:11.107 nvme1n1 : 5.07 2491.04 9.73 0.00 0.00 51061.79 18945.86 71493.82 00:14:11.107 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0x0 length 0x80000 00:14:11.107 nvme1n2 : 5.06 2412.78 9.42 0.00 0.00 52741.70 4974.78 70540.57 00:14:11.107 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0x80000 length 0x80000 00:14:11.107 nvme1n2 : 5.07 2443.67 9.55 0.00 0.00 51983.11 4468.36 70540.57 00:14:11.107 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0x0 length 0x80000 00:14:11.107 nvme1n3 : 5.07 2458.71 9.60 0.00 0.00 51721.88 4498.15 67204.19 00:14:11.107 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0x80000 length 0x80000 00:14:11.107 nvme1n3 : 5.07 2425.01 9.47 0.00 0.00 52237.57 6672.76 77213.32 00:14:11.107 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0x0 length 0xbd0bd 00:14:11.107 nvme2n1 : 5.07 2955.75 11.55 0.00 0.00 42925.01 7983.48 67680.81 00:14:11.107 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:11.107 nvme2n1 : 5.08 3010.15 11.76 0.00 0.00 42057.26 8460.10 58148.31 00:14:11.107 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0x0 length 0xa0000 00:14:11.107 nvme3n1 : 5.07 2640.50 10.31 0.00 0.00 47950.85 5242.88 63867.81 00:14:11.107 Job: nvme3n1 (Core 
Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:11.107 Verification LBA range: start 0xa0000 length 0xa0000 00:14:11.107 nvme3n1 : 5.08 2497.39 9.76 0.00 0.00 50616.63 4140.68 62914.56 00:14:11.107 =================================================================================================================== 00:14:11.107 Total : 30689.14 119.88 0.00 0.00 49669.10 4140.68 83409.45 00:14:11.107 [2024-02-14 19:14:48.120839] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:12.040 00:14:12.040 real 0m7.108s 00:14:12.040 user 0m9.262s 00:14:12.040 sys 0m3.211s 00:14:12.040 ************************************ 00:14:12.040 END TEST bdev_verify 00:14:12.040 ************************************ 00:14:12.040 19:14:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:12.040 19:14:49 -- common/autotest_common.sh@10 -- # set +x 00:14:12.040 19:14:49 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:12.040 19:14:49 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:14:12.040 19:14:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:12.040 19:14:49 -- common/autotest_common.sh@10 -- # set +x 00:14:12.041 ************************************ 00:14:12.041 START TEST bdev_verify_big_io 00:14:12.041 ************************************ 00:14:12.041 19:14:49 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:12.041 [2024-02-14 19:14:49.354948] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:12.041 [2024-02-14 19:14:49.355119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69945 ] 00:14:12.299 [2024-02-14 19:14:49.524760] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:12.299 [2024-02-14 19:14:49.695576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.299 [2024-02-14 19:14:49.695596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.299 [2024-02-14 19:14:49.695763] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:12.865 Running I/O for 5 seconds... 
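Both verify passes above drive the same bdevperf example binary against the generated bdev.json; only the I/O size differs between them. Copied from the run_test lines in the trace:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # bdev_verify: 4 KiB I/O, queue depth 128, 5 s, cores 0-1 (-m 0x3)
  $BDEVPERF --json $CONF -q 128 -o 4096 -w verify -t 5 -C -m 0x3
  # bdev_verify_big_io: identical run with 64 KiB I/O
  $BDEVPERF --json $CONF -q 128 -o 65536 -w verify -t 5 -C -m 0x3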
00:14:19.428 00:14:19.428 Latency(us) 00:14:19.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.428 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0x0 length 0x2000 00:14:19.428 nvme0n1 : 5.62 215.84 13.49 0.00 0.00 568983.32 50760.61 686340.65 00:14:19.428 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0x2000 length 0x2000 00:14:19.428 nvme0n1 : 5.67 245.24 15.33 0.00 0.00 508376.73 68157.44 785478.75 00:14:19.428 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0x0 length 0x8000 00:14:19.428 nvme1n1 : 5.63 262.68 16.42 0.00 0.00 468446.56 52667.11 568137.54 00:14:19.428 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0x8000 length 0x8000 00:14:19.428 nvme1n1 : 5.72 243.17 15.20 0.00 0.00 496801.00 71493.82 709218.68 00:14:19.428 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0x0 length 0x8000 00:14:19.428 nvme1n2 : 5.63 230.42 14.40 0.00 0.00 521282.72 67204.19 610080.58 00:14:19.428 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0x8000 length 0x8000 00:14:19.428 nvme1n2 : 5.72 243.09 15.19 0.00 0.00 482983.06 57671.68 907494.87 00:14:19.428 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0x0 length 0x8000 00:14:19.428 nvme1n3 : 5.68 244.68 15.29 0.00 0.00 483846.19 45041.11 549072.52 00:14:19.428 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0x8000 length 0x8000 00:14:19.428 nvme1n3 : 5.73 242.95 15.18 0.00 0.00 479432.10 47185.92 674901.64 00:14:19.428 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0x0 length 0xbd0b 00:14:19.428 nvme2n1 : 5.68 260.38 16.27 0.00 0.00 440857.42 46232.67 419430.40 00:14:19.428 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:19.428 nvme2n1 : 5.77 271.46 16.97 0.00 0.00 418785.12 35270.28 530007.51 00:14:19.428 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0x0 length 0xa000 00:14:19.428 nvme3n1 : 5.69 275.45 17.22 0.00 0.00 411671.06 2129.92 427056.41 00:14:19.428 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:19.428 Verification LBA range: start 0xa000 length 0xa000 00:14:19.428 nvme3n1 : 5.78 302.65 18.92 0.00 0.00 366500.89 7447.27 398458.88 00:14:19.428 =================================================================================================================== 00:14:19.428 Total : 3038.02 189.88 0.00 0.00 465945.50 2129.92 907494.87 00:14:19.428 [2024-02-14 19:14:56.235781] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:19.995 00:14:19.995 real 0m8.051s 00:14:19.995 user 0m14.285s 00:14:19.995 sys 0m0.671s 00:14:19.995 19:14:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:19.995 19:14:57 -- common/autotest_common.sh@10 -- # set 
+x 00:14:19.995 ************************************ 00:14:19.995 END TEST bdev_verify_big_io 00:14:19.995 ************************************ 00:14:19.995 19:14:57 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:19.995 19:14:57 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:14:19.995 19:14:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:19.995 19:14:57 -- common/autotest_common.sh@10 -- # set +x 00:14:19.995 ************************************ 00:14:19.995 START TEST bdev_write_zeroes 00:14:19.995 ************************************ 00:14:19.995 19:14:57 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:20.253 [2024-02-14 19:14:57.487149] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:20.254 [2024-02-14 19:14:57.487404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70055 ] 00:14:20.254 [2024-02-14 19:14:57.670820] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.512 [2024-02-14 19:14:57.851179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.512 [2024-02-14 19:14:57.851280] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:21.079 Running I/O for 1 seconds... 
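The write_zeroes pass reuses the same binary but runs on a single reactor core for one second, which matches the 'Total cores available: 1' notice above. As logged:

  # single-core write_zeroes smoke test (command copied from the run_test line)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1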
00:14:22.015 00:14:22.015 Latency(us) 00:14:22.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.015 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:22.015 nvme0n1 : 1.01 11968.41 46.75 0.00 0.00 10683.29 6911.07 19184.17 00:14:22.015 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:22.015 nvme1n1 : 1.01 11952.92 46.69 0.00 0.00 10688.49 6911.07 20018.27 00:14:22.015 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:22.015 nvme1n2 : 1.01 11983.78 46.81 0.00 0.00 10650.47 6225.92 20256.58 00:14:22.015 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:22.015 nvme1n3 : 1.02 11968.52 46.75 0.00 0.00 10653.54 6732.33 20375.74 00:14:22.015 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:22.015 nvme2n1 : 1.02 16235.48 63.42 0.00 0.00 7827.84 2710.81 14775.39 00:14:22.015 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:22.015 nvme3n1 : 1.02 11942.57 46.65 0.00 0.00 10624.48 5421.61 16562.73 00:14:22.015 =================================================================================================================== 00:14:22.015 Total : 76051.68 297.08 0.00 0.00 10053.06 2710.81 20375.74 00:14:22.015 [2024-02-14 19:14:59.290220] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:23.394 00:14:23.394 real 0m3.009s 00:14:23.394 user 0m2.258s 00:14:23.394 sys 0m0.549s 00:14:23.394 19:15:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:23.394 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:14:23.394 ************************************ 00:14:23.394 END TEST bdev_write_zeroes 00:14:23.394 ************************************ 00:14:23.394 19:15:00 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:23.394 19:15:00 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:14:23.394 19:15:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:23.394 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:14:23.394 ************************************ 00:14:23.394 START TEST bdev_json_nonenclosed 00:14:23.394 ************************************ 00:14:23.394 19:15:00 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:23.394 [2024-02-14 19:15:00.506108] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:14:23.394 [2024-02-14 19:15:00.506272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70114 ] 00:14:23.394 [2024-02-14 19:15:00.665600] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.652 [2024-02-14 19:15:00.836087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.652 [2024-02-14 19:15:00.836191] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:23.652 [2024-02-14 19:15:00.836310] json_config.c: 598:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:23.652 [2024-02-14 19:15:00.836338] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:23.652 [2024-02-14 19:15:00.836352] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:23.652 [2024-02-14 19:15:00.836392] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:23.911 00:14:23.911 real 0m0.783s 00:14:23.911 user 0m0.569s 00:14:23.911 sys 0m0.110s 00:14:23.911 19:15:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:23.911 19:15:01 -- common/autotest_common.sh@10 -- # set +x 00:14:23.911 ************************************ 00:14:23.911 END TEST bdev_json_nonenclosed 00:14:23.911 ************************************ 00:14:23.911 19:15:01 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:23.911 19:15:01 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:14:23.911 19:15:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:23.911 19:15:01 -- common/autotest_common.sh@10 -- # set +x 00:14:23.911 ************************************ 00:14:23.911 START TEST bdev_json_nonarray 00:14:23.911 ************************************ 00:14:23.911 19:15:01 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:24.170 [2024-02-14 19:15:01.352140] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:24.170 [2024-02-14 19:15:01.352300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70144 ] 00:14:24.170 [2024-02-14 19:15:01.522542] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.429 [2024-02-14 19:15:01.700496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.429 [2024-02-14 19:15:01.700641] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:24.429 [2024-02-14 19:15:01.700799] json_config.c: 604:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
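The two JSON negative tests traced here (bdev_json_nonenclosed and bdev_json_nonarray) feed deliberately malformed configs to bdevperf and expect exactly the parser errors quoted in the log: "not enclosed in {}" and "'subsystems' should be an array". The log never prints the contents of nonenclosed.json or nonarray.json, so the bash sketch below only illustrates file shapes that would trip the same checks; the /tmp paths and the BDEVPERF variable are assumptions, while the bdevperf flags are the ones shown in the trace above.

  # Sketch only -- file contents are assumed, not copied from the SPDK tree.
  BDEVPERF=./build/examples/bdevperf                      # assumed relative path to the binary invoked above
  printf '"subsystems": []\n' > /tmp/nonenclosed.json     # JSON fragment not enclosed in {}
  printf '{ "subsystems": {} }\n' > /tmp/nonarray.json    # "subsystems" present but not an array
  for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
    "$BDEVPERF" --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1 || echo "expected config failure: $cfg"
  done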
00:14:24.429 [2024-02-14 19:15:01.700843] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:24.429 [2024-02-14 19:15:01.700857] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:24.429 [2024-02-14 19:15:01.700896] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:24.688 00:14:24.688 real 0m0.813s 00:14:24.688 user 0m0.588s 00:14:24.688 sys 0m0.118s 00:14:24.688 19:15:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:24.688 19:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:24.688 ************************************ 00:14:24.688 END TEST bdev_json_nonarray 00:14:24.688 ************************************ 00:14:24.946 19:15:02 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:14:24.946 19:15:02 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:14:24.946 19:15:02 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:14:24.946 19:15:02 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:24.946 19:15:02 -- bdev/blockdev.sh@809 -- # cleanup 00:14:24.946 19:15:02 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:24.946 19:15:02 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:24.946 19:15:02 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:14:24.946 19:15:02 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:14:24.946 19:15:02 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:14:24.946 19:15:02 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:14:24.946 19:15:02 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:25.901 lsblk: /dev/nvme0c0n1: not a block device 00:14:25.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:32.462 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:14:32.462 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:14:32.462 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:14:32.462 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:14:32.462 00:14:32.462 real 1m5.929s 00:14:32.462 user 1m42.414s 00:14:32.462 sys 0m36.945s 00:14:32.462 19:15:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:32.462 19:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.462 ************************************ 00:14:32.462 END TEST blockdev_xnvme 00:14:32.462 ************************************ 00:14:32.462 19:15:09 -- spdk/autotest.sh@259 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:32.462 19:15:09 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:14:32.462 19:15:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:32.462 19:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.462 ************************************ 00:14:32.462 START TEST ublk 00:14:32.462 ************************************ 00:14:32.462 19:15:09 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:32.462 * Looking for test storage... 
00:14:32.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:32.462 19:15:09 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:32.462 19:15:09 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:32.462 19:15:09 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:32.462 19:15:09 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:32.462 19:15:09 -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:32.462 19:15:09 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:32.462 19:15:09 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:32.462 19:15:09 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:32.462 19:15:09 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:32.462 19:15:09 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:32.462 19:15:09 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:32.462 19:15:09 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:32.462 19:15:09 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:32.462 19:15:09 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:32.462 19:15:09 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:32.462 19:15:09 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:32.462 19:15:09 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:32.462 19:15:09 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:32.462 19:15:09 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:32.462 19:15:09 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:32.462 19:15:09 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:14:32.462 19:15:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:32.462 19:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.462 ************************************ 00:14:32.462 START TEST test_save_ublk_config 00:14:32.462 ************************************ 00:14:32.462 19:15:09 -- common/autotest_common.sh@1102 -- # test_save_config 00:14:32.462 19:15:09 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:32.462 19:15:09 -- ublk/ublk.sh@103 -- # tgtpid=70498 00:14:32.462 19:15:09 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:32.462 19:15:09 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:32.462 19:15:09 -- ublk/ublk.sh@106 -- # waitforlisten 70498 00:14:32.462 19:15:09 -- common/autotest_common.sh@817 -- # '[' -z 70498 ']' 00:14:32.462 19:15:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.462 19:15:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:32.462 19:15:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.462 19:15:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:32.462 19:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.462 [2024-02-14 19:15:09.682758] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:14:32.462 [2024-02-14 19:15:09.682909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70498 ] 00:14:32.462 [2024-02-14 19:15:09.856386] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.721 [2024-02-14 19:15:10.106743] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:32.721 [2024-02-14 19:15:10.107029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.096 19:15:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:34.096 19:15:11 -- common/autotest_common.sh@850 -- # return 0 00:14:34.096 19:15:11 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:34.096 19:15:11 -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:34.096 19:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.096 19:15:11 -- common/autotest_common.sh@10 -- # set +x 00:14:34.096 [2024-02-14 19:15:11.372522] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:34.096 malloc0 00:14:34.096 [2024-02-14 19:15:11.445752] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:34.096 [2024-02-14 19:15:11.445866] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:34.096 [2024-02-14 19:15:11.445881] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:34.096 [2024-02-14 19:15:11.445893] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:34.096 [2024-02-14 19:15:11.450332] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:34.096 [2024-02-14 19:15:11.450370] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:34.096 [2024-02-14 19:15:11.453508] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:34.097 [2024-02-14 19:15:11.453648] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:34.097 [2024-02-14 19:15:11.474552] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:34.097 0 00:14:34.097 19:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.097 19:15:11 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:34.097 19:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.097 19:15:11 -- common/autotest_common.sh@10 -- # set +x 00:14:34.355 19:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.355 19:15:11 -- ublk/ublk.sh@115 -- # config='{ 00:14:34.355 "subsystems": [ 00:14:34.355 { 00:14:34.355 "subsystem": "iobuf", 00:14:34.355 "config": [ 00:14:34.355 { 00:14:34.355 "method": "iobuf_set_options", 00:14:34.355 "params": { 00:14:34.355 "small_pool_count": 8192, 00:14:34.355 "large_pool_count": 1024, 00:14:34.355 "small_bufsize": 8192, 00:14:34.355 "large_bufsize": 135168 00:14:34.355 } 00:14:34.355 } 00:14:34.355 ] 00:14:34.355 }, 00:14:34.355 { 00:14:34.355 "subsystem": "sock", 00:14:34.355 "config": [ 00:14:34.355 { 00:14:34.355 "method": "sock_impl_set_options", 00:14:34.355 "params": { 00:14:34.355 "impl_name": "posix", 00:14:34.355 "recv_buf_size": 2097152, 00:14:34.355 "send_buf_size": 2097152, 00:14:34.355 "enable_recv_pipe": true, 00:14:34.355 "enable_quickack": false, 00:14:34.355 "enable_placement_id": 0, 00:14:34.355 
"enable_zerocopy_send_server": true, 00:14:34.355 "enable_zerocopy_send_client": false, 00:14:34.355 "zerocopy_threshold": 0, 00:14:34.355 "tls_version": 0, 00:14:34.355 "enable_ktls": false 00:14:34.355 } 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "method": "sock_impl_set_options", 00:14:34.356 "params": { 00:14:34.356 "impl_name": "ssl", 00:14:34.356 "recv_buf_size": 4096, 00:14:34.356 "send_buf_size": 4096, 00:14:34.356 "enable_recv_pipe": true, 00:14:34.356 "enable_quickack": false, 00:14:34.356 "enable_placement_id": 0, 00:14:34.356 "enable_zerocopy_send_server": true, 00:14:34.356 "enable_zerocopy_send_client": false, 00:14:34.356 "zerocopy_threshold": 0, 00:14:34.356 "tls_version": 0, 00:14:34.356 "enable_ktls": false 00:14:34.356 } 00:14:34.356 } 00:14:34.356 ] 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "vmd", 00:14:34.356 "config": [] 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "accel", 00:14:34.356 "config": [ 00:14:34.356 { 00:14:34.356 "method": "accel_set_options", 00:14:34.356 "params": { 00:14:34.356 "small_cache_size": 128, 00:14:34.356 "large_cache_size": 16, 00:14:34.356 "task_count": 2048, 00:14:34.356 "sequence_count": 2048, 00:14:34.356 "buf_count": 2048 00:14:34.356 } 00:14:34.356 } 00:14:34.356 ] 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "bdev", 00:14:34.356 "config": [ 00:14:34.356 { 00:14:34.356 "method": "bdev_set_options", 00:14:34.356 "params": { 00:14:34.356 "bdev_io_pool_size": 65535, 00:14:34.356 "bdev_io_cache_size": 256, 00:14:34.356 "bdev_auto_examine": true, 00:14:34.356 "iobuf_small_cache_size": 128, 00:14:34.356 "iobuf_large_cache_size": 16 00:14:34.356 } 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "method": "bdev_raid_set_options", 00:14:34.356 "params": { 00:14:34.356 "process_window_size_kb": 1024 00:14:34.356 } 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "method": "bdev_iscsi_set_options", 00:14:34.356 "params": { 00:14:34.356 "timeout_sec": 30 00:14:34.356 } 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "method": "bdev_nvme_set_options", 00:14:34.356 "params": { 00:14:34.356 "action_on_timeout": "none", 00:14:34.356 "timeout_us": 0, 00:14:34.356 "timeout_admin_us": 0, 00:14:34.356 "keep_alive_timeout_ms": 10000, 00:14:34.356 "arbitration_burst": 0, 00:14:34.356 "low_priority_weight": 0, 00:14:34.356 "medium_priority_weight": 0, 00:14:34.356 "high_priority_weight": 0, 00:14:34.356 "nvme_adminq_poll_period_us": 10000, 00:14:34.356 "nvme_ioq_poll_period_us": 0, 00:14:34.356 "io_queue_requests": 0, 00:14:34.356 "delay_cmd_submit": true, 00:14:34.356 "transport_retry_count": 4, 00:14:34.356 "bdev_retry_count": 3, 00:14:34.356 "transport_ack_timeout": 0, 00:14:34.356 "ctrlr_loss_timeout_sec": 0, 00:14:34.356 "reconnect_delay_sec": 0, 00:14:34.356 "fast_io_fail_timeout_sec": 0, 00:14:34.356 "disable_auto_failback": false, 00:14:34.356 "generate_uuids": false, 00:14:34.356 "transport_tos": 0, 00:14:34.356 "nvme_error_stat": false, 00:14:34.356 "rdma_srq_size": 0, 00:14:34.356 "io_path_stat": false, 00:14:34.356 "allow_accel_sequence": false, 00:14:34.356 "rdma_max_cq_size": 0, 00:14:34.356 "rdma_cm_event_timeout_ms": 0 00:14:34.356 } 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "method": "bdev_nvme_set_hotplug", 00:14:34.356 "params": { 00:14:34.356 "period_us": 100000, 00:14:34.356 "enable": false 00:14:34.356 } 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "method": "bdev_malloc_create", 00:14:34.356 "params": { 00:14:34.356 "name": "malloc0", 00:14:34.356 "num_blocks": 8192, 00:14:34.356 "block_size": 4096, 
00:14:34.356 "physical_block_size": 4096, 00:14:34.356 "uuid": "983e9eb2-2f6f-4282-963f-8bc792e8fe4a", 00:14:34.356 "optimal_io_boundary": 0 00:14:34.356 } 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "method": "bdev_wait_for_examine" 00:14:34.356 } 00:14:34.356 ] 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "scsi", 00:14:34.356 "config": null 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "scheduler", 00:14:34.356 "config": [ 00:14:34.356 { 00:14:34.356 "method": "framework_set_scheduler", 00:14:34.356 "params": { 00:14:34.356 "name": "static" 00:14:34.356 } 00:14:34.356 } 00:14:34.356 ] 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "vhost_scsi", 00:14:34.356 "config": [] 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "vhost_blk", 00:14:34.356 "config": [] 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "ublk", 00:14:34.356 "config": [ 00:14:34.356 { 00:14:34.356 "method": "ublk_create_target", 00:14:34.356 "params": { 00:14:34.356 "cpumask": "1" 00:14:34.356 } 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "method": "ublk_start_disk", 00:14:34.356 "params": { 00:14:34.356 "bdev_name": "malloc0", 00:14:34.356 "ublk_id": 0, 00:14:34.356 "num_queues": 1, 00:14:34.356 "queue_depth": 128 00:14:34.356 } 00:14:34.356 } 00:14:34.356 ] 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "nbd", 00:14:34.356 "config": [] 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "nvmf", 00:14:34.356 "config": [ 00:14:34.356 { 00:14:34.356 "method": "nvmf_set_config", 00:14:34.356 "params": { 00:14:34.356 "discovery_filter": "match_any", 00:14:34.356 "admin_cmd_passthru": { 00:14:34.356 "identify_ctrlr": false 00:14:34.356 } 00:14:34.356 } 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "method": "nvmf_set_max_subsystems", 00:14:34.356 "params": { 00:14:34.356 "max_subsystems": 1024 00:14:34.356 } 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "method": "nvmf_set_crdt", 00:14:34.356 "params": { 00:14:34.356 "crdt1": 0, 00:14:34.356 "crdt2": 0, 00:14:34.356 "crdt3": 0 00:14:34.356 } 00:14:34.356 } 00:14:34.356 ] 00:14:34.356 }, 00:14:34.356 { 00:14:34.356 "subsystem": "iscsi", 00:14:34.356 "config": [ 00:14:34.356 { 00:14:34.356 "method": "iscsi_set_options", 00:14:34.356 "params": { 00:14:34.356 "node_base": "iqn.2016-06.io.spdk", 00:14:34.356 "max_sessions": 128, 00:14:34.356 "max_connections_per_session": 2, 00:14:34.356 "max_queue_depth": 64, 00:14:34.356 "default_time2wait": 2, 00:14:34.356 "default_time2retain": 20, 00:14:34.356 "first_burst_length": 8192, 00:14:34.356 "immediate_data": true, 00:14:34.356 "allow_duplicated_isid": false, 00:14:34.356 "error_recovery_level": 0, 00:14:34.356 "nop_timeout": 60, 00:14:34.356 "nop_in_interval": 30, 00:14:34.356 "disable_chap": false, 00:14:34.356 "require_chap": false, 00:14:34.356 "mutual_chap": false, 00:14:34.356 "chap_group": 0, 00:14:34.356 "max_large_datain_per_connection": 64, 00:14:34.356 "max_r2t_per_connection": 4, 00:14:34.356 "pdu_pool_size": 36864, 00:14:34.356 "immediate_data_pool_size": 16384, 00:14:34.356 "data_out_pool_size": 2048 00:14:34.356 } 00:14:34.356 } 00:14:34.356 ] 00:14:34.356 } 00:14:34.356 ] 00:14:34.356 }' 00:14:34.356 19:15:11 -- ublk/ublk.sh@116 -- # killprocess 70498 00:14:34.356 19:15:11 -- common/autotest_common.sh@924 -- # '[' -z 70498 ']' 00:14:34.356 19:15:11 -- common/autotest_common.sh@928 -- # kill -0 70498 00:14:34.356 19:15:11 -- common/autotest_common.sh@929 -- # uname 00:14:34.356 19:15:11 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:34.356 
19:15:11 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 70498 00:14:34.356 19:15:11 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:34.356 killing process with pid 70498 00:14:34.356 19:15:11 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:34.356 19:15:11 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 70498' 00:14:34.356 19:15:11 -- common/autotest_common.sh@943 -- # kill 70498 00:14:34.356 19:15:11 -- common/autotest_common.sh@948 -- # wait 70498 00:14:36.261 [2024-02-14 19:15:13.167792] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:36.261 [2024-02-14 19:15:13.200723] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:36.261 [2024-02-14 19:15:13.200967] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:36.261 [2024-02-14 19:15:13.208531] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:36.261 [2024-02-14 19:15:13.208598] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:36.261 [2024-02-14 19:15:13.208614] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:36.261 [2024-02-14 19:15:13.208648] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:36.261 [2024-02-14 19:15:13.208845] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:37.198 19:15:14 -- ublk/ublk.sh@119 -- # tgtpid=70566 00:14:37.198 19:15:14 -- ublk/ublk.sh@121 -- # waitforlisten 70566 00:14:37.198 19:15:14 -- common/autotest_common.sh@817 -- # '[' -z 70566 ']' 00:14:37.198 19:15:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.198 19:15:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:37.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.198 19:15:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:37.198 19:15:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:37.198 19:15:14 -- ublk/ublk.sh@118 -- # echo '{ 00:14:37.198 "subsystems": [ 00:14:37.198 { 00:14:37.198 "subsystem": "iobuf", 00:14:37.198 "config": [ 00:14:37.198 { 00:14:37.198 "method": "iobuf_set_options", 00:14:37.198 "params": { 00:14:37.198 "small_pool_count": 8192, 00:14:37.198 "large_pool_count": 1024, 00:14:37.198 "small_bufsize": 8192, 00:14:37.198 "large_bufsize": 135168 00:14:37.198 } 00:14:37.198 } 00:14:37.198 ] 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "subsystem": "sock", 00:14:37.198 "config": [ 00:14:37.198 { 00:14:37.198 "method": "sock_impl_set_options", 00:14:37.198 "params": { 00:14:37.198 "impl_name": "posix", 00:14:37.198 "recv_buf_size": 2097152, 00:14:37.198 "send_buf_size": 2097152, 00:14:37.198 "enable_recv_pipe": true, 00:14:37.198 "enable_quickack": false, 00:14:37.198 "enable_placement_id": 0, 00:14:37.198 "enable_zerocopy_send_server": true, 00:14:37.198 "enable_zerocopy_send_client": false, 00:14:37.198 "zerocopy_threshold": 0, 00:14:37.198 "tls_version": 0, 00:14:37.198 "enable_ktls": false 00:14:37.198 } 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "method": "sock_impl_set_options", 00:14:37.198 "params": { 00:14:37.198 "impl_name": "ssl", 00:14:37.198 "recv_buf_size": 4096, 00:14:37.198 "send_buf_size": 4096, 00:14:37.198 "enable_recv_pipe": true, 00:14:37.198 "enable_quickack": false, 00:14:37.198 "enable_placement_id": 0, 00:14:37.198 "enable_zerocopy_send_server": true, 00:14:37.198 "enable_zerocopy_send_client": false, 00:14:37.198 "zerocopy_threshold": 0, 00:14:37.198 "tls_version": 0, 00:14:37.198 "enable_ktls": false 00:14:37.198 } 00:14:37.198 } 00:14:37.198 ] 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "subsystem": "vmd", 00:14:37.198 "config": [] 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "subsystem": "accel", 00:14:37.198 "config": [ 00:14:37.198 { 00:14:37.198 "method": "accel_set_options", 00:14:37.198 "params": { 00:14:37.198 "small_cache_size": 128, 00:14:37.198 "large_cache_size": 16, 00:14:37.198 "task_count": 2048, 00:14:37.198 "sequence_count": 2048, 00:14:37.198 "buf_count": 2048 00:14:37.198 } 00:14:37.198 } 00:14:37.198 ] 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "subsystem": "bdev", 00:14:37.198 "config": [ 00:14:37.198 { 00:14:37.198 "method": "bdev_set_options", 00:14:37.198 "params": { 00:14:37.198 "bdev_io_pool_size": 65535, 00:14:37.198 "bdev_io_cache_size": 256, 00:14:37.198 "bdev_auto_examine": true, 00:14:37.198 "iobuf_small_cache_size": 128, 00:14:37.198 "iobuf_large_cache_size": 16 00:14:37.198 } 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "method": "bdev_raid_set_options", 00:14:37.198 "params": { 00:14:37.198 "process_window_size_kb": 1024 00:14:37.198 } 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "method": "bdev_iscsi_set_options", 00:14:37.198 "params": { 00:14:37.198 "timeout_sec": 30 00:14:37.198 } 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "method": "bdev_nvme_set_options", 00:14:37.198 "params": { 00:14:37.198 "action_on_timeout": "none", 00:14:37.198 "timeout_us": 0, 00:14:37.198 "timeout_admin_us": 0, 00:14:37.198 "keep_alive_timeout_ms": 10000, 00:14:37.198 "arbitration_burst": 0, 00:14:37.198 "low_priority_weight": 0, 00:14:37.198 "medium_priority_weight": 0, 00:14:37.198 "high_priority_weight": 0, 00:14:37.198 "nvme_adminq_poll_period_us": 10000, 00:14:37.198 "nvme_ioq_poll_period_us": 0, 00:14:37.198 "io_queue_requests": 0, 00:14:37.198 "delay_cmd_submit": true, 00:14:37.198 "transport_retry_count": 4, 00:14:37.198 
"bdev_retry_count": 3, 00:14:37.198 "transport_ack_timeout": 0, 00:14:37.198 "ctrlr_loss_timeout_sec": 0, 00:14:37.198 "reconnect_delay_sec": 0, 00:14:37.198 "fast_io_fail_timeout_sec": 0, 00:14:37.198 "disable_auto_failback": false, 00:14:37.198 "generate_uuids": false, 00:14:37.198 "transport_tos": 0, 00:14:37.198 "nvme_error_stat": false, 00:14:37.198 "rdma_srq_size": 0, 00:14:37.198 "io_path_stat": false, 00:14:37.198 "allow_accel_sequence": false, 00:14:37.198 "rdma_max_cq_size": 0, 00:14:37.198 "rdma_cm_event_timeout_ms": 0 00:14:37.198 } 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "method": "bdev_nvme_set_hotplug", 00:14:37.198 "params": { 00:14:37.198 "period_us": 100000, 00:14:37.198 "enable": false 00:14:37.198 } 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "method": "bdev_malloc_create", 00:14:37.198 "params": { 00:14:37.198 "name": "malloc0", 00:14:37.198 "num_blocks": 8192, 00:14:37.198 "block_size": 4096, 00:14:37.198 "physical_block_size": 4096, 00:14:37.198 "uuid": "983e9eb2-2f6f-4282-963f-8bc792e8fe4a", 00:14:37.198 "optimal_io_boundary": 0 00:14:37.198 } 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "method": "bdev_wait_for_examine" 00:14:37.198 } 00:14:37.198 ] 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "subsystem": "scsi", 00:14:37.198 "config": null 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "subsystem": "scheduler", 00:14:37.198 "config": [ 00:14:37.198 { 00:14:37.198 "method": "framework_set_scheduler", 00:14:37.198 "params": { 00:14:37.198 "name": "static" 00:14:37.198 } 00:14:37.198 } 00:14:37.198 ] 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "subsystem": "vhost_scsi", 00:14:37.198 "config": [] 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "subsystem": "vhost_blk", 00:14:37.198 "config": [] 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "subsystem": "ublk", 00:14:37.198 "config": [ 00:14:37.198 { 00:14:37.198 "method": "ublk_create_target", 00:14:37.198 "params": { 00:14:37.198 "cpumask": "1" 00:14:37.198 } 00:14:37.198 }, 00:14:37.198 { 00:14:37.198 "method": "ublk_start_disk", 00:14:37.199 "params": { 00:14:37.199 "bdev_name": "malloc0", 00:14:37.199 "ublk_id": 0, 00:14:37.199 "num_queues": 1, 00:14:37.199 "queue_depth": 128 00:14:37.199 } 00:14:37.199 } 00:14:37.199 ] 00:14:37.199 }, 00:14:37.199 { 00:14:37.199 "subsystem": "nbd", 00:14:37.199 "config": [] 00:14:37.199 }, 00:14:37.199 { 00:14:37.199 "subsystem": "nvmf", 00:14:37.199 "config": [ 00:14:37.199 { 00:14:37.199 "method": "nvmf_set_config", 00:14:37.199 "params": { 00:14:37.199 "discovery_filter": "match_any", 00:14:37.199 "admin_cmd_passthru": { 00:14:37.199 "identify_ctrlr": false 00:14:37.199 } 00:14:37.199 } 00:14:37.199 }, 00:14:37.199 { 00:14:37.199 "method": "nvmf_set_max_subsystems", 00:14:37.199 "params": { 00:14:37.199 "max_subsystems": 1024 00:14:37.199 } 00:14:37.199 }, 00:14:37.199 { 00:14:37.199 "method": "nvmf_set_crdt", 00:14:37.199 "params": { 00:14:37.199 "crdt1": 0, 00:14:37.199 "crdt2": 0, 00:14:37.199 "crdt3": 0 00:14:37.199 } 00:14:37.199 } 00:14:37.199 ] 00:14:37.199 }, 00:14:37.199 { 00:14:37.199 "subsystem": "iscsi", 00:14:37.199 "config": [ 00:14:37.199 { 00:14:37.199 "method": "iscsi_set_options", 00:14:37.199 "params": { 00:14:37.199 "node_base": "iqn.2016-06.io.spdk", 00:14:37.199 "max_sessions": 128, 00:14:37.199 "max_connections_per_session": 2, 00:14:37.199 "max_queue_depth": 64, 00:14:37.199 "default_time2wait": 2, 00:14:37.199 "default_time2retain": 20, 00:14:37.199 "first_burst_length": 8192, 00:14:37.199 "immediate_data": true, 00:14:37.199 "allow_duplicated_isid": false, 
00:14:37.199 "error_recovery_level": 0, 00:14:37.199 "nop_timeout": 60, 00:14:37.199 "nop_in_interval": 30, 00:14:37.199 "disable_chap": false, 00:14:37.199 "require_chap": false, 00:14:37.199 "mutual_chap": false, 00:14:37.199 "chap_group": 0, 00:14:37.199 "max_large_datain_per_connection": 64, 00:14:37.199 "max_r2t_per_connection": 4, 00:14:37.199 "pdu_pool_size": 36864, 00:14:37.199 "immediate_data_pool_size": 16384, 00:14:37.199 "data_out_pool_size": 2048 00:14:37.199 } 00:14:37.199 } 00:14:37.199 ] 00:14:37.199 } 00:14:37.199 ] 00:14:37.199 }' 00:14:37.199 19:15:14 -- common/autotest_common.sh@10 -- # set +x 00:14:37.199 19:15:14 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:37.199 [2024-02-14 19:15:14.522963] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:37.199 [2024-02-14 19:15:14.523163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70566 ] 00:14:37.458 [2024-02-14 19:15:14.695061] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.717 [2024-02-14 19:15:14.894887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:37.717 [2024-02-14 19:15:14.895099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.717 [2024-02-14 19:15:14.895155] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:38.652 [2024-02-14 19:15:15.851516] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:38.652 [2024-02-14 19:15:15.858749] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:38.652 [2024-02-14 19:15:15.858846] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:38.652 [2024-02-14 19:15:15.858862] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:38.652 [2024-02-14 19:15:15.858871] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:38.652 [2024-02-14 19:15:15.866565] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:38.652 [2024-02-14 19:15:15.866597] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:38.652 [2024-02-14 19:15:15.872654] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:38.652 [2024-02-14 19:15:15.872784] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:38.652 [2024-02-14 19:15:15.889633] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:38.910 19:15:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:38.910 19:15:16 -- common/autotest_common.sh@850 -- # return 0 00:14:38.910 19:15:16 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:38.910 19:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.910 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:14:38.910 19:15:16 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:38.910 19:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.910 19:15:16 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == 
\/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:38.910 19:15:16 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:38.910 19:15:16 -- ublk/ublk.sh@125 -- # killprocess 70566 00:14:38.910 19:15:16 -- common/autotest_common.sh@924 -- # '[' -z 70566 ']' 00:14:38.910 19:15:16 -- common/autotest_common.sh@928 -- # kill -0 70566 00:14:38.910 19:15:16 -- common/autotest_common.sh@929 -- # uname 00:14:38.910 19:15:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:38.910 19:15:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 70566 00:14:38.910 19:15:16 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:38.910 killing process with pid 70566 00:14:38.910 19:15:16 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:38.910 19:15:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 70566' 00:14:38.910 19:15:16 -- common/autotest_common.sh@943 -- # kill 70566 00:14:38.910 19:15:16 -- common/autotest_common.sh@948 -- # wait 70566 00:14:38.910 [2024-02-14 19:15:16.299605] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:40.285 [2024-02-14 19:15:17.534310] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:40.285 [2024-02-14 19:15:17.570649] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:40.285 [2024-02-14 19:15:17.570854] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:40.285 [2024-02-14 19:15:17.578618] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:40.285 [2024-02-14 19:15:17.578681] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:40.285 [2024-02-14 19:15:17.578694] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:40.285 [2024-02-14 19:15:17.578725] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:40.285 [2024-02-14 19:15:17.578978] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:41.662 19:15:18 -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:41.662 00:14:41.662 real 0m9.202s 00:14:41.662 user 0m8.441s 00:14:41.662 sys 0m2.159s 00:14:41.663 19:15:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.663 ************************************ 00:14:41.663 19:15:18 -- common/autotest_common.sh@10 -- # set +x 00:14:41.663 END TEST test_save_ublk_config 00:14:41.663 ************************************ 00:14:41.663 19:15:18 -- ublk/ublk.sh@139 -- # spdk_pid=70647 00:14:41.663 19:15:18 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:41.663 19:15:18 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:41.663 19:15:18 -- ublk/ublk.sh@141 -- # waitforlisten 70647 00:14:41.663 19:15:18 -- common/autotest_common.sh@817 -- # '[' -z 70647 ']' 00:14:41.663 19:15:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.663 19:15:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:41.663 19:15:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
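test_save_ublk_config, which finishes just above, checks that a configuration captured from a running target recreates the same /dev/ublkb0 device when fed back to a fresh spdk_tgt (the trace shows the second target started with -c /dev/fd/63 on the echoed JSON). A minimal sketch of that round trip, assuming the repo-relative paths used in this log and a hypothetical $tgtpid variable holding the first target's pid:

  config=$(./scripts/rpc.py save_config)               # capture the JSON dump shown earlier in the log
  kill "$tgtpid"                                       # stop the original target
  ./build/bin/spdk_tgt -L ublk -c <(echo "$config") &  # restart from the dump, as ublk.sh does via /dev/fd/63
  ./scripts/rpc.py ublk_get_disks                      # once the RPC socket is up, /dev/ublkb0 should be reported again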
00:14:41.663 19:15:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:41.663 19:15:18 -- common/autotest_common.sh@10 -- # set +x 00:14:41.663 [2024-02-14 19:15:18.924379] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:41.663 [2024-02-14 19:15:18.924536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70647 ] 00:14:41.922 [2024-02-14 19:15:19.090792] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:41.922 [2024-02-14 19:15:19.323870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:41.922 [2024-02-14 19:15:19.324367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.922 [2024-02-14 19:15:19.324370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.300 19:15:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.300 19:15:20 -- common/autotest_common.sh@850 -- # return 0 00:14:43.300 19:15:20 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:43.300 19:15:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:14:43.300 19:15:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:43.300 19:15:20 -- common/autotest_common.sh@10 -- # set +x 00:14:43.300 ************************************ 00:14:43.300 START TEST test_create_ublk 00:14:43.300 ************************************ 00:14:43.300 19:15:20 -- common/autotest_common.sh@1102 -- # test_create_ublk 00:14:43.300 19:15:20 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:43.300 19:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.300 19:15:20 -- common/autotest_common.sh@10 -- # set +x 00:14:43.300 [2024-02-14 19:15:20.661065] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:43.300 19:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.300 19:15:20 -- ublk/ublk.sh@33 -- # ublk_target= 00:14:43.300 19:15:20 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:43.300 19:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.300 19:15:20 -- common/autotest_common.sh@10 -- # set +x 00:14:43.559 19:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.560 19:15:20 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:43.560 19:15:20 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:43.560 19:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.560 19:15:20 -- common/autotest_common.sh@10 -- # set +x 00:14:43.560 [2024-02-14 19:15:20.916708] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:43.560 [2024-02-14 19:15:20.917241] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:43.560 [2024-02-14 19:15:20.917265] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:43.560 [2024-02-14 19:15:20.917279] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:43.560 [2024-02-14 19:15:20.924874] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:43.560 [2024-02-14 19:15:20.924909] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:43.560 [2024-02-14 19:15:20.931543] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: 
ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:43.560 [2024-02-14 19:15:20.941861] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:43.560 [2024-02-14 19:15:20.956662] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:43.560 19:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.560 19:15:20 -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:43.560 19:15:20 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:43.560 19:15:20 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:43.560 19:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.560 19:15:20 -- common/autotest_common.sh@10 -- # set +x 00:14:43.819 19:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.819 19:15:20 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:43.819 { 00:14:43.819 "ublk_device": "/dev/ublkb0", 00:14:43.819 "id": 0, 00:14:43.819 "queue_depth": 512, 00:14:43.819 "num_queues": 4, 00:14:43.819 "bdev_name": "Malloc0" 00:14:43.819 } 00:14:43.819 ]' 00:14:43.819 19:15:20 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:43.819 19:15:21 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:43.819 19:15:21 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:43.819 19:15:21 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:43.819 19:15:21 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:43.819 19:15:21 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:43.819 19:15:21 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:43.819 19:15:21 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:43.819 19:15:21 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:44.078 19:15:21 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:44.078 19:15:21 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:44.078 19:15:21 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:44.078 19:15:21 -- lvol/common.sh@41 -- # local offset=0 00:14:44.078 19:15:21 -- lvol/common.sh@42 -- # local size=134217728 00:14:44.078 19:15:21 -- lvol/common.sh@43 -- # local rw=write 00:14:44.078 19:15:21 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:44.078 19:15:21 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:44.078 19:15:21 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:44.078 19:15:21 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:44.078 19:15:21 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:44.078 19:15:21 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:44.078 19:15:21 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:44.078 fio: verification read phase will never start because write phase uses all of runtime 00:14:44.078 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:44.078 fio-3.35 00:14:44.078 Starting 1 process 00:14:56.286 00:14:56.287 fio_test: (groupid=0, jobs=1): err= 0: pid=70701: Wed Feb 14 19:15:31 2024 00:14:56.287 write: IOPS=10.7k, BW=41.7MiB/s (43.8MB/s)(417MiB/10001msec); 0 zone resets 
00:14:56.287 clat (usec): min=62, max=7950, avg=92.24, stdev=161.31 00:14:56.287 lat (usec): min=63, max=7951, avg=92.97, stdev=161.34 00:14:56.287 clat percentiles (usec): 00:14:56.287 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 76], 00:14:56.287 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:14:56.287 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 99], 95.00th=[ 109], 00:14:56.287 | 99.00th=[ 130], 99.50th=[ 151], 99.90th=[ 3294], 99.95th=[ 3621], 00:14:56.287 | 99.99th=[ 4080] 00:14:56.287 bw ( KiB/s): min=18968, max=45200, per=99.98%, avg=42736.84, stdev=5824.99, samples=19 00:14:56.287 iops : min= 4742, max=11300, avg=10684.21, stdev=1456.25, samples=19 00:14:56.287 lat (usec) : 100=90.49%, 250=9.11%, 500=0.01%, 750=0.01%, 1000=0.03% 00:14:56.287 lat (msec) : 2=0.11%, 4=0.22%, 10=0.01% 00:14:56.287 cpu : usr=2.91%, sys=7.39%, ctx=106875, majf=0, minf=797 00:14:56.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:56.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.287 issued rwts: total=0,106875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:56.287 00:14:56.287 Run status group 0 (all jobs): 00:14:56.287 WRITE: bw=41.7MiB/s (43.8MB/s), 41.7MiB/s-41.7MiB/s (43.8MB/s-43.8MB/s), io=417MiB (438MB), run=10001-10001msec 00:14:56.287 00:14:56.287 Disk stats (read/write): 00:14:56.287 ublkb0: ios=0/105728, merge=0/0, ticks=0/8922, in_queue=8923, util=99.07% 00:14:56.287 19:15:31 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:56.287 19:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 [2024-02-14 19:15:31.500889] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:56.287 [2024-02-14 19:15:31.530675] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:56.287 [2024-02-14 19:15:31.534912] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:56.287 [2024-02-14 19:15:31.541585] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:56.287 [2024-02-14 19:15:31.541961] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:56.287 [2024-02-14 19:15:31.542007] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:56.287 19:15:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.287 19:15:31 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:56.287 19:15:31 -- common/autotest_common.sh@638 -- # local es=0 00:14:56.287 19:15:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:56.287 19:15:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:14:56.287 19:15:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:56.287 19:15:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:14:56.287 19:15:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:56.287 19:15:31 -- common/autotest_common.sh@641 -- # rpc_cmd ublk_stop_disk 0 00:14:56.287 19:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 [2024-02-14 19:15:31.547961] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:56.287 request: 
00:14:56.287 { 00:14:56.287 "ublk_id": 0, 00:14:56.287 "method": "ublk_stop_disk", 00:14:56.287 "req_id": 1 00:14:56.287 } 00:14:56.287 Got JSON-RPC error response 00:14:56.287 response: 00:14:56.287 { 00:14:56.287 "code": -19, 00:14:56.287 "message": "No such device" 00:14:56.287 } 00:14:56.287 19:15:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:56.287 19:15:31 -- common/autotest_common.sh@641 -- # es=1 00:14:56.287 19:15:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:56.287 19:15:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:56.287 19:15:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:56.287 19:15:31 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:56.287 19:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 [2024-02-14 19:15:31.564680] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:56.287 [2024-02-14 19:15:31.572602] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:56.287 [2024-02-14 19:15:31.572664] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:56.287 19:15:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.287 19:15:31 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:56.287 19:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 19:15:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.287 19:15:31 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:56.287 19:15:31 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:56.287 19:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 19:15:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.287 19:15:31 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:56.287 19:15:31 -- lvol/common.sh@26 -- # jq length 00:14:56.287 19:15:31 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:56.287 19:15:31 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:56.287 19:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 19:15:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.287 19:15:31 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:56.287 19:15:31 -- lvol/common.sh@28 -- # jq length 00:14:56.287 19:15:31 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:56.287 00:14:56.287 real 0m11.346s 00:14:56.287 user 0m0.733s 00:14:56.287 sys 0m0.834s 00:14:56.287 19:15:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:56.287 19:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 ************************************ 00:14:56.287 END TEST test_create_ublk 00:14:56.287 ************************************ 00:14:56.287 19:15:32 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:56.287 19:15:32 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:14:56.287 19:15:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:56.287 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 ************************************ 00:14:56.287 START TEST test_create_multi_ublk 00:14:56.287 ************************************ 00:14:56.287 19:15:32 -- common/autotest_common.sh@1102 -- # test_create_multi_ublk 00:14:56.287 19:15:32 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 
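Both test_create_ublk (ended above) and test_create_multi_ublk (starting here) drive the same RPC lifecycle against the target; the multi variant simply repeats the malloc/ublk pair for ids 0 through 3. A condensed sketch of that sequence using scripts/rpc.py, equivalent to the rpc_cmd calls traced in this log (paths and the single-device sizes are the ones shown above):

  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096    # 128 MiB malloc bdev with 4096-byte blocks
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512     # expose it as /dev/ublkb0: 4 queues, depth 512
  ./scripts/rpc.py ublk_get_disks                            # reports ublk_device/id/queue_depth/num_queues/bdev_name
  ./scripts/rpc.py ublk_stop_disk 0
  ./scripts/rpc.py ublk_destroy_target
  ./scripts/rpc.py bdev_malloc_delete Malloc0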
00:14:56.287 19:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 [2024-02-14 19:15:32.058897] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:56.287 19:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.287 19:15:32 -- ublk/ublk.sh@62 -- # ublk_target= 00:14:56.287 19:15:32 -- ublk/ublk.sh@64 -- # seq 0 3 00:14:56.287 19:15:32 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.287 19:15:32 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:56.287 19:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 19:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.287 19:15:32 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:56.287 19:15:32 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:56.287 19:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 [2024-02-14 19:15:32.303740] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:56.287 [2024-02-14 19:15:32.304252] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:56.287 [2024-02-14 19:15:32.304275] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:56.287 [2024-02-14 19:15:32.304288] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:56.287 [2024-02-14 19:15:32.311902] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:56.287 [2024-02-14 19:15:32.311935] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:56.287 [2024-02-14 19:15:32.319597] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:56.287 [2024-02-14 19:15:32.320384] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:56.287 [2024-02-14 19:15:32.335701] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:56.287 19:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.287 19:15:32 -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:56.287 19:15:32 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.287 19:15:32 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:56.287 19:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 19:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.287 19:15:32 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:56.287 19:15:32 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:56.287 19:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.287 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 [2024-02-14 19:15:32.589740] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:56.287 [2024-02-14 19:15:32.590235] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:56.287 [2024-02-14 19:15:32.590261] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:56.287 [2024-02-14 19:15:32.590272] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:56.287 
[2024-02-14 19:15:32.597563] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:56.287 [2024-02-14 19:15:32.597592] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:56.287 [2024-02-14 19:15:32.603592] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:56.287 [2024-02-14 19:15:32.604309] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:56.288 [2024-02-14 19:15:32.619593] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:56.288 19:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.288 19:15:32 -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:56.288 19:15:32 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.288 19:15:32 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:56.288 19:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.288 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:56.288 19:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.288 19:15:32 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:56.288 19:15:32 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:56.288 19:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.288 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:56.288 [2024-02-14 19:15:32.880708] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:56.288 [2024-02-14 19:15:32.881195] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:56.288 [2024-02-14 19:15:32.881219] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:56.288 [2024-02-14 19:15:32.881234] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:56.288 [2024-02-14 19:15:32.888564] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:56.288 [2024-02-14 19:15:32.888600] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:56.288 [2024-02-14 19:15:32.902589] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:56.288 [2024-02-14 19:15:32.903352] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:56.288 [2024-02-14 19:15:32.921634] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:56.288 19:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.288 19:15:32 -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:56.288 19:15:32 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.288 19:15:32 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:56.288 19:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.288 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:56.288 19:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.288 19:15:33 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:56.288 19:15:33 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:56.288 19:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.288 19:15:33 -- common/autotest_common.sh@10 -- # set +x 00:14:56.288 [2024-02-14 19:15:33.175693] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:56.288 [2024-02-14 19:15:33.176164] 
ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:56.288 [2024-02-14 19:15:33.176190] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:56.288 [2024-02-14 19:15:33.176201] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:56.288 [2024-02-14 19:15:33.183558] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:56.288 [2024-02-14 19:15:33.183588] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:56.288 [2024-02-14 19:15:33.190517] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:56.288 [2024-02-14 19:15:33.191246] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:56.288 [2024-02-14 19:15:33.198551] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:56.288 19:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.288 19:15:33 -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:56.288 19:15:33 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:56.288 19:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.288 19:15:33 -- common/autotest_common.sh@10 -- # set +x 00:14:56.288 19:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.288 19:15:33 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:56.288 { 00:14:56.288 "ublk_device": "/dev/ublkb0", 00:14:56.288 "id": 0, 00:14:56.288 "queue_depth": 512, 00:14:56.288 "num_queues": 4, 00:14:56.288 "bdev_name": "Malloc0" 00:14:56.288 }, 00:14:56.288 { 00:14:56.288 "ublk_device": "/dev/ublkb1", 00:14:56.288 "id": 1, 00:14:56.288 "queue_depth": 512, 00:14:56.288 "num_queues": 4, 00:14:56.288 "bdev_name": "Malloc1" 00:14:56.288 }, 00:14:56.288 { 00:14:56.288 "ublk_device": "/dev/ublkb2", 00:14:56.288 "id": 2, 00:14:56.288 "queue_depth": 512, 00:14:56.288 "num_queues": 4, 00:14:56.288 "bdev_name": "Malloc2" 00:14:56.288 }, 00:14:56.288 { 00:14:56.288 "ublk_device": "/dev/ublkb3", 00:14:56.288 "id": 3, 00:14:56.288 "queue_depth": 512, 00:14:56.288 "num_queues": 4, 00:14:56.288 "bdev_name": "Malloc3" 00:14:56.288 } 00:14:56.288 ]' 00:14:56.288 19:15:33 -- ublk/ublk.sh@72 -- # seq 0 3 00:14:56.288 19:15:33 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.288 19:15:33 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:56.288 19:15:33 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:56.288 19:15:33 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:56.288 19:15:33 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:56.288 19:15:33 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:56.288 19:15:33 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:56.288 19:15:33 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:56.288 19:15:33 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:56.288 19:15:33 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:56.288 19:15:33 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:56.288 19:15:33 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.288 19:15:33 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:56.288 19:15:33 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:56.288 19:15:33 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:56.288 19:15:33 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:56.288 19:15:33 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:56.288 19:15:33 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:56.288 19:15:33 -- 
ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:56.547 19:15:33 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:56.547 19:15:33 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:56.547 19:15:33 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:56.547 19:15:33 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.547 19:15:33 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:56.547 19:15:33 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:56.547 19:15:33 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:56.547 19:15:33 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:56.547 19:15:33 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:56.547 19:15:33 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:56.547 19:15:33 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:56.547 19:15:33 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:56.806 19:15:33 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:56.806 19:15:34 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:56.806 19:15:34 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.806 19:15:34 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:56.806 19:15:34 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:56.806 19:15:34 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:56.806 19:15:34 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:56.806 19:15:34 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:56.806 19:15:34 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:56.806 19:15:34 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:56.806 19:15:34 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:56.806 19:15:34 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:57.065 19:15:34 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:57.065 19:15:34 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:57.065 19:15:34 -- ublk/ublk.sh@85 -- # seq 0 3 00:14:57.065 19:15:34 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.065 19:15:34 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:57.065 19:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:57.065 19:15:34 -- common/autotest_common.sh@10 -- # set +x 00:14:57.065 [2024-02-14 19:15:34.273817] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:57.065 [2024-02-14 19:15:34.306086] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:57.065 [2024-02-14 19:15:34.308907] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:57.065 [2024-02-14 19:15:34.314539] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:57.065 [2024-02-14 19:15:34.314958] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:57.065 [2024-02-14 19:15:34.315003] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:57.065 19:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:57.065 19:15:34 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.065 19:15:34 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:57.065 19:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:57.065 19:15:34 -- common/autotest_common.sh@10 -- # set +x 00:14:57.065 [2024-02-14 19:15:34.321775] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:57.065 [2024-02-14 19:15:34.351024] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:57.065 [2024-02-14 19:15:34.353880] ublk.c: 
433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:57.065 [2024-02-14 19:15:34.359535] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:57.065 [2024-02-14 19:15:34.359953] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:57.065 [2024-02-14 19:15:34.359994] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:57.065 19:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:57.065 19:15:34 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.065 19:15:34 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:57.065 19:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:57.065 19:15:34 -- common/autotest_common.sh@10 -- # set +x 00:14:57.065 [2024-02-14 19:15:34.367666] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:57.065 [2024-02-14 19:15:34.398629] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:57.065 [2024-02-14 19:15:34.399990] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:57.065 [2024-02-14 19:15:34.406727] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:57.065 [2024-02-14 19:15:34.407081] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:57.065 [2024-02-14 19:15:34.407124] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:57.065 19:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:57.065 19:15:34 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.065 19:15:34 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:57.065 19:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:57.065 19:15:34 -- common/autotest_common.sh@10 -- # set +x 00:14:57.065 [2024-02-14 19:15:34.421710] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:57.065 [2024-02-14 19:15:34.455026] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:57.065 [2024-02-14 19:15:34.456366] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:57.065 [2024-02-14 19:15:34.460575] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:57.065 [2024-02-14 19:15:34.460950] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:57.065 [2024-02-14 19:15:34.460985] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:57.065 19:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:57.065 19:15:34 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:57.324 [2024-02-14 19:15:34.717724] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:57.324 [2024-02-14 19:15:34.727504] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:57.324 [2024-02-14 19:15:34.727575] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:57.583 19:15:34 -- ublk/ublk.sh@93 -- # seq 0 3 00:14:57.583 19:15:34 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.583 19:15:34 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:57.583 19:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:57.583 19:15:34 -- common/autotest_common.sh@10 -- # set +x 00:14:57.841 19:15:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:57.841 19:15:35 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.841 19:15:35 
-- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:57.841 19:15:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:57.841 19:15:35 -- common/autotest_common.sh@10 -- # set +x 00:14:58.099 19:15:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:58.099 19:15:35 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.099 19:15:35 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:58.099 19:15:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.099 19:15:35 -- common/autotest_common.sh@10 -- # set +x 00:14:58.357 19:15:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:58.357 19:15:35 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.357 19:15:35 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:58.357 19:15:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.357 19:15:35 -- common/autotest_common.sh@10 -- # set +x 00:14:58.615 19:15:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:58.615 19:15:35 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:58.615 19:15:35 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:58.615 19:15:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.615 19:15:35 -- common/autotest_common.sh@10 -- # set +x 00:14:58.615 19:15:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:58.615 19:15:36 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:58.615 19:15:36 -- lvol/common.sh@26 -- # jq length 00:14:58.875 19:15:36 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:58.875 19:15:36 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:58.875 19:15:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.875 19:15:36 -- common/autotest_common.sh@10 -- # set +x 00:14:58.875 19:15:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:58.875 19:15:36 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:58.875 19:15:36 -- lvol/common.sh@28 -- # jq length 00:14:58.875 19:15:36 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:58.875 00:14:58.875 real 0m4.073s 00:14:58.875 user 0m1.332s 00:14:58.875 sys 0m0.132s 00:14:58.875 19:15:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:58.875 19:15:36 -- common/autotest_common.sh@10 -- # set +x 00:14:58.875 ************************************ 00:14:58.875 END TEST test_create_multi_ublk 00:14:58.875 ************************************ 00:14:58.875 19:15:36 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:58.875 19:15:36 -- ublk/ublk.sh@147 -- # cleanup 00:14:58.875 19:15:36 -- ublk/ublk.sh@130 -- # killprocess 70647 00:14:58.875 19:15:36 -- common/autotest_common.sh@924 -- # '[' -z 70647 ']' 00:14:58.875 19:15:36 -- common/autotest_common.sh@928 -- # kill -0 70647 00:14:58.875 19:15:36 -- common/autotest_common.sh@929 -- # uname 00:14:58.875 19:15:36 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:58.875 19:15:36 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 70647 00:14:58.875 killing process with pid 70647 00:14:58.875 19:15:36 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:58.875 19:15:36 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:58.875 19:15:36 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 70647' 00:14:58.875 19:15:36 -- common/autotest_common.sh@943 -- # kill 70647 00:14:58.875 19:15:36 -- common/autotest_common.sh@948 -- # wait 70647 00:14:59.812 [2024-02-14 19:15:37.146461] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 
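For reference, the per-device lifecycle that test_create_multi_ublk walked through four times above condenses to a short RPC sequence. A minimal sketch, assuming a running spdk_tgt and the in-tree scripts/rpc.py on its default socket (Malloc0 / ublk id 0 shown; the test repeats this for ids 0-3):

    rpc=scripts/rpc.py
    $rpc ublk_create_target                          # bring up the ublk target once
    $rpc bdev_malloc_create -b Malloc0 128 4096      # 128 MB malloc bdev, 4096-byte blocks
    $rpc ublk_start_disk Malloc0 0 -q 4 -d 512       # exposes /dev/ublkb0 (4 queues, depth 512)
    $rpc ublk_get_disks | jq -r '.[0].ublk_device'   # verify -> /dev/ublkb0
    $rpc ublk_stop_disk 0                            # STOP_DEV followed by DEL_DEV
    $rpc bdev_malloc_delete Malloc0
    $rpc -t 120 ublk_destroy_target                  # tear the target down once all disks are gone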
00:14:59.812 [2024-02-14 19:15:37.146519] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:01.188 00:15:01.188 real 0m28.780s 00:15:01.188 user 0m44.041s 00:15:01.188 sys 0m8.256s 00:15:01.188 19:15:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:01.188 ************************************ 00:15:01.188 END TEST ublk 00:15:01.188 ************************************ 00:15:01.188 19:15:38 -- common/autotest_common.sh@10 -- # set +x 00:15:01.188 19:15:38 -- spdk/autotest.sh@260 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:01.188 19:15:38 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:15:01.188 19:15:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:01.188 19:15:38 -- common/autotest_common.sh@10 -- # set +x 00:15:01.188 ************************************ 00:15:01.188 START TEST ublk_recovery 00:15:01.188 ************************************ 00:15:01.188 19:15:38 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:01.188 * Looking for test storage... 00:15:01.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:01.188 19:15:38 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:01.188 19:15:38 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:01.188 19:15:38 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:01.188 19:15:38 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:01.188 19:15:38 -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:01.188 19:15:38 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:01.188 19:15:38 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:01.188 19:15:38 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:01.188 19:15:38 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:01.188 19:15:38 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:01.188 19:15:38 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71038 00:15:01.188 19:15:38 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:01.188 19:15:38 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71038 00:15:01.188 19:15:38 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:01.188 19:15:38 -- common/autotest_common.sh@817 -- # '[' -z 71038 ']' 00:15:01.188 19:15:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.188 19:15:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:01.188 19:15:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.188 19:15:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:01.188 19:15:38 -- common/autotest_common.sh@10 -- # set +x 00:15:01.188 [2024-02-14 19:15:38.484313] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:15:01.188 [2024-02-14 19:15:38.484544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71038 ] 00:15:01.447 [2024-02-14 19:15:38.655655] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:01.447 [2024-02-14 19:15:38.827378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:01.448 [2024-02-14 19:15:38.827786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.448 [2024-02-14 19:15:38.827800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.825 19:15:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:02.825 19:15:40 -- common/autotest_common.sh@850 -- # return 0 00:15:02.825 19:15:40 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:02.825 19:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.825 19:15:40 -- common/autotest_common.sh@10 -- # set +x 00:15:02.825 [2024-02-14 19:15:40.149952] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:02.825 19:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.825 19:15:40 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:02.825 19:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.825 19:15:40 -- common/autotest_common.sh@10 -- # set +x 00:15:03.084 malloc0 00:15:03.084 19:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.084 19:15:40 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:03.084 19:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.084 19:15:40 -- common/autotest_common.sh@10 -- # set +x 00:15:03.084 [2024-02-14 19:15:40.277763] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:03.084 [2024-02-14 19:15:40.277908] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:03.084 [2024-02-14 19:15:40.277939] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:03.084 [2024-02-14 19:15:40.277966] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:03.084 [2024-02-14 19:15:40.291542] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:03.084 [2024-02-14 19:15:40.291593] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:03.084 [2024-02-14 19:15:40.298545] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:03.084 [2024-02-14 19:15:40.298724] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:03.084 [2024-02-14 19:15:40.320586] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:03.084 1 00:15:03.084 19:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.084 19:15:40 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:04.064 19:15:41 -- ublk/ublk_recovery.sh@31 -- # fio_proc=71086 00:15:04.064 19:15:41 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:04.064 19:15:41 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:04.064 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.064 fio-3.35 00:15:04.064 Starting 1 process 00:15:09.334 19:15:46 -- ublk/ublk_recovery.sh@36 -- # kill -9 71038 00:15:09.334 19:15:46 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:14.605 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71038 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:14.605 19:15:51 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71192 00:15:14.605 19:15:51 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:14.605 19:15:51 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:14.605 19:15:51 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71192 00:15:14.605 19:15:51 -- common/autotest_common.sh@817 -- # '[' -z 71192 ']' 00:15:14.605 19:15:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.605 19:15:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:14.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.605 19:15:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.605 19:15:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:14.605 19:15:51 -- common/autotest_common.sh@10 -- # set +x 00:15:14.605 [2024-02-14 19:15:51.455885] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:15:14.605 [2024-02-14 19:15:51.456049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71192 ] 00:15:14.605 [2024-02-14 19:15:51.630396] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:14.605 [2024-02-14 19:15:51.834355] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:14.605 [2024-02-14 19:15:51.834744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.605 [2024-02-14 19:15:51.834754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.982 19:15:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:15.982 19:15:53 -- common/autotest_common.sh@850 -- # return 0 00:15:15.982 19:15:53 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:15.982 19:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.982 19:15:53 -- common/autotest_common.sh@10 -- # set +x 00:15:15.982 [2024-02-14 19:15:53.097990] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:15.982 19:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.982 19:15:53 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:15.982 19:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.982 19:15:53 -- common/autotest_common.sh@10 -- # set +x 00:15:15.982 malloc0 00:15:15.982 19:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.982 19:15:53 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:15.982 19:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.982 19:15:53 -- common/autotest_common.sh@10 -- # set +x 00:15:15.982 [2024-02-14 19:15:53.222797] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:15.982 [2024-02-14 19:15:53.222870] ublk.c: 
933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:15.982 [2024-02-14 19:15:53.222884] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:15.982 [2024-02-14 19:15:53.230674] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:15.982 [2024-02-14 19:15:53.230702] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:15.982 1 00:15:15.982 [2024-02-14 19:15:53.230818] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:15.982 19:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.982 19:15:53 -- ublk/ublk_recovery.sh@52 -- # wait 71086 00:15:42.525 [2024-02-14 19:16:16.563525] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:42.525 [2024-02-14 19:16:16.569528] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:42.525 [2024-02-14 19:16:16.578944] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:42.525 [2024-02-14 19:16:16.578997] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:04.465 00:16:04.465 fio_test: (groupid=0, jobs=1): err= 0: pid=71089: Wed Feb 14 19:16:41 2024 00:16:04.465 read: IOPS=10.8k, BW=42.0MiB/s (44.1MB/s)(2522MiB/60002msec) 00:16:04.465 slat (usec): min=2, max=223, avg= 6.18, stdev= 2.96 00:16:04.465 clat (usec): min=1198, max=30250k, avg=6030.85, stdev=308085.19 00:16:04.465 lat (usec): min=1212, max=30250k, avg=6037.04, stdev=308085.21 00:16:04.465 clat percentiles (msec): 00:16:04.465 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:16:04.465 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:16:04.465 | 70.00th=[ 3], 80.00th=[ 3], 90.00th=[ 4], 95.00th=[ 4], 00:16:04.465 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:16:04.465 | 99.99th=[17113] 00:16:04.465 bw ( KiB/s): min= 536, max=94288, per=100.00%, avg=84814.60, stdev=15038.73, samples=60 00:16:04.465 iops : min= 134, max=23572, avg=21203.65, stdev=3759.68, samples=60 00:16:04.465 write: IOPS=10.8k, BW=42.0MiB/s (44.0MB/s)(2521MiB/60002msec); 0 zone resets 00:16:04.465 slat (usec): min=2, max=1411, avg= 6.21, stdev= 3.69 00:16:04.465 clat (usec): min=1123, max=30250k, avg=5853.59, stdev=294065.41 00:16:04.465 lat (usec): min=1131, max=30250k, avg=5859.81, stdev=294065.41 00:16:04.465 clat percentiles (usec): 00:16:04.465 | 1.00th=[ 2409], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2671], 00:16:04.465 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2933], 00:16:04.465 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3294], 95.00th=[ 3785], 00:16:04.465 | 99.00th=[ 6063], 99.50th=[ 6718], 99.90th=[ 8586], 99.95th=[ 9634], 00:16:04.465 | 99.99th=[14091] 00:16:04.465 bw ( KiB/s): min= 488, max=94328, per=100.00%, avg=84766.23, stdev=15025.41, samples=60 00:16:04.465 iops : min= 122, max=23582, avg=21191.55, stdev=3756.35, samples=60 00:16:04.465 lat (msec) : 2=0.16%, 4=95.23%, 10=4.57%, 20=0.03%, >=2000=0.01% 00:16:04.465 cpu : usr=5.68%, sys=12.36%, ctx=37586, majf=0, minf=13 00:16:04.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:04.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.465 issued rwts: total=645695,645271,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:16:04.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.465 00:16:04.465 Run status group 0 (all jobs): 00:16:04.465 READ: bw=42.0MiB/s (44.1MB/s), 42.0MiB/s-42.0MiB/s (44.1MB/s-44.1MB/s), io=2522MiB (2645MB), run=60002-60002msec 00:16:04.465 WRITE: bw=42.0MiB/s (44.0MB/s), 42.0MiB/s-42.0MiB/s (44.0MB/s-44.0MB/s), io=2521MiB (2643MB), run=60002-60002msec 00:16:04.465 00:16:04.465 Disk stats (read/write): 00:16:04.465 ublkb1: ios=643190/642844, merge=0/0, ticks=3827639/3641649, in_queue=7469289, util=99.94% 00:16:04.465 19:16:41 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:04.465 19:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.465 19:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:04.465 [2024-02-14 19:16:41.603434] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:04.465 [2024-02-14 19:16:41.640696] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:04.465 [2024-02-14 19:16:41.640968] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:04.465 [2024-02-14 19:16:41.646391] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:04.465 [2024-02-14 19:16:41.646521] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:04.465 [2024-02-14 19:16:41.646539] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:04.465 19:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.465 19:16:41 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:04.465 19:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.465 19:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:04.465 [2024-02-14 19:16:41.654709] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:04.465 [2024-02-14 19:16:41.662622] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:04.465 [2024-02-14 19:16:41.662666] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:04.465 19:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.465 19:16:41 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:04.465 19:16:41 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:04.465 19:16:41 -- ublk/ublk_recovery.sh@14 -- # killprocess 71192 00:16:04.465 19:16:41 -- common/autotest_common.sh@924 -- # '[' -z 71192 ']' 00:16:04.465 19:16:41 -- common/autotest_common.sh@928 -- # kill -0 71192 00:16:04.465 19:16:41 -- common/autotest_common.sh@929 -- # uname 00:16:04.465 19:16:41 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:04.465 19:16:41 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 71192 00:16:04.465 killing process with pid 71192 00:16:04.465 19:16:41 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:04.465 19:16:41 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:04.465 19:16:41 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 71192' 00:16:04.465 19:16:41 -- common/autotest_common.sh@943 -- # kill 71192 00:16:04.465 19:16:41 -- common/autotest_common.sh@948 -- # wait 71192 00:16:05.403 [2024-02-14 19:16:42.555326] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:05.403 [2024-02-14 19:16:42.555379] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:06.376 ************************************ 00:16:06.376 END TEST ublk_recovery 00:16:06.376 ************************************ 00:16:06.376 00:16:06.376 
real 1m5.397s 00:16:06.376 user 1m52.065s 00:16:06.376 sys 0m19.106s 00:16:06.376 19:16:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:06.377 19:16:43 -- common/autotest_common.sh@10 -- # set +x 00:16:06.377 19:16:43 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@268 -- # timing_exit lib 00:16:06.377 19:16:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:06.377 19:16:43 -- common/autotest_common.sh@10 -- # set +x 00:16:06.377 19:16:43 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:16:06.377 19:16:43 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:06.377 19:16:43 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:16:06.377 19:16:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:06.377 19:16:43 -- common/autotest_common.sh@10 -- # set +x 00:16:06.377 ************************************ 00:16:06.377 START TEST ftl 00:16:06.377 ************************************ 00:16:06.377 19:16:43 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:06.635 * Looking for test storage... 00:16:06.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.635 19:16:43 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:06.635 19:16:43 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:06.635 19:16:43 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.635 19:16:43 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.635 19:16:43 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
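Before the FTL suite gets going, it is worth condensing what the ublk_recovery run above did once the target was killed under I/O. Roughly, with PID bookkeeping elided:

    kill -9 "$spdk_pid"                              # hard-kill spdk_tgt while fio is still issuing I/O
    build/bin/spdk_tgt -m 0x3 -L ublk &              # start a fresh target process
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1       # GET_DEV_INFO + START_USER_RECOVERY for /dev/ublkb1
    wait "$fio_proc"                                 # fio survives the restart and finishes its 60 s run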
00:16:06.635 19:16:43 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:06.635 19:16:43 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.635 19:16:43 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:06.635 19:16:43 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:06.635 19:16:43 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.635 19:16:43 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.635 19:16:43 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:06.635 19:16:43 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:06.635 19:16:43 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:06.635 19:16:43 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:06.635 19:16:43 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:06.635 19:16:43 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:06.635 19:16:43 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.635 19:16:43 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.635 19:16:43 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:06.635 19:16:43 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:06.635 19:16:43 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:06.635 19:16:43 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:06.635 19:16:43 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:06.635 19:16:43 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:06.635 19:16:43 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:06.635 19:16:43 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:06.635 19:16:43 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:06.635 19:16:43 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:06.635 19:16:43 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.635 19:16:43 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:06.635 19:16:43 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:06.635 19:16:43 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:06.635 19:16:43 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:06.635 19:16:43 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:07.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:07.203 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:07.203 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:07.203 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:07.203 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:07.203 19:16:44 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=71983 00:16:07.203 19:16:44 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:07.203 19:16:44 -- ftl/ftl.sh@38 -- # waitforlisten 71983 00:16:07.203 19:16:44 -- common/autotest_common.sh@817 -- # '[' -z 71983 ']' 00:16:07.203 19:16:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.203 19:16:44 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:16:07.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.203 19:16:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.203 19:16:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:07.203 19:16:44 -- common/autotest_common.sh@10 -- # set +x 00:16:07.203 [2024-02-14 19:16:44.532916] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:07.203 [2024-02-14 19:16:44.533080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71983 ] 00:16:07.461 [2024-02-14 19:16:44.704659] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.721 [2024-02-14 19:16:44.927899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:07.721 [2024-02-14 19:16:44.928152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.980 19:16:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:07.980 19:16:45 -- common/autotest_common.sh@850 -- # return 0 00:16:07.980 19:16:45 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:08.239 19:16:45 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:09.176 19:16:46 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:09.176 19:16:46 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:09.744 19:16:46 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:09.744 19:16:46 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:09.744 19:16:46 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:10.003 19:16:47 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:16:10.003 19:16:47 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:10.003 19:16:47 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:16:10.003 19:16:47 -- ftl/ftl.sh@50 -- # break 00:16:10.003 19:16:47 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:16:10.004 19:16:47 -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:10.004 19:16:47 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:10.004 19:16:47 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:10.004 19:16:47 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:16:10.004 19:16:47 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:10.004 19:16:47 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:16:10.004 19:16:47 -- ftl/ftl.sh@63 -- # break 00:16:10.004 19:16:47 -- ftl/ftl.sh@66 -- # killprocess 71983 00:16:10.004 19:16:47 -- common/autotest_common.sh@924 -- # '[' -z 71983 ']' 00:16:10.004 19:16:47 -- common/autotest_common.sh@928 -- # kill -0 71983 00:16:10.004 19:16:47 -- common/autotest_common.sh@929 -- # uname 00:16:10.004 19:16:47 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:10.004 19:16:47 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 71983 00:16:10.004 19:16:47 -- common/autotest_common.sh@930 -- # process_name=reactor_0 
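The two jq filters just traced are how ftl.sh splits the attached namespaces into an FTL write-buffer cache and a base device: a non-zoned bdev with 64-byte metadata (md_size==64) and at least 1310720 blocks becomes the nv-cache candidate, and any other large enough non-zoned namespace becomes the base. Reproduced side by side for readability (in this run they resolve to 0000:00:06.0 and 0000:00:07.0):

    rpc=scripts/rpc.py
    # nv-cache candidates: non-zoned, >= 1310720 blocks, 64-byte metadata
    $rpc bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
    # base candidates: large enough, non-zoned, and not the PCI address chosen as cache
    $rpc bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'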
00:16:10.004 19:16:47 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:10.004 killing process with pid 71983 00:16:10.004 19:16:47 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 71983' 00:16:10.004 19:16:47 -- common/autotest_common.sh@943 -- # kill 71983 00:16:10.004 19:16:47 -- common/autotest_common.sh@948 -- # wait 71983 00:16:11.915 19:16:49 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:16:11.915 19:16:49 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:16:11.915 19:16:49 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:16:11.915 19:16:49 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:16:11.915 19:16:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:11.915 19:16:49 -- common/autotest_common.sh@10 -- # set +x 00:16:11.915 ************************************ 00:16:11.915 START TEST ftl_fio_basic 00:16:11.915 ************************************ 00:16:11.915 19:16:49 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:16:11.915 * Looking for test storage... 00:16:11.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.915 19:16:49 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:11.915 19:16:49 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:11.915 19:16:49 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.915 19:16:49 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.915 19:16:49 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:11.915 19:16:49 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:11.915 19:16:49 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.915 19:16:49 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:11.915 19:16:49 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:11.915 19:16:49 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.915 19:16:49 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.915 19:16:49 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:11.915 19:16:49 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:11.915 19:16:49 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:11.915 19:16:49 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:11.915 19:16:49 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:11.915 19:16:49 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:11.915 19:16:49 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.915 19:16:49 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.915 19:16:49 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:11.915 19:16:49 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:11.915 19:16:49 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:11.915 19:16:49 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:11.915 19:16:49 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:11.915 19:16:49 -- ftl/common.sh@22 -- # 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:11.915 19:16:49 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:11.915 19:16:49 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:11.915 19:16:49 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.915 19:16:49 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.915 19:16:49 -- ftl/fio.sh@11 -- # declare -A suite 00:16:11.915 19:16:49 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:11.915 19:16:49 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:11.915 19:16:49 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:11.915 19:16:49 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.915 19:16:49 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:16:11.915 19:16:49 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:16:11.915 19:16:49 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:11.915 19:16:49 -- ftl/fio.sh@26 -- # uuid= 00:16:11.915 19:16:49 -- ftl/fio.sh@27 -- # timeout=240 00:16:11.915 19:16:49 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:11.915 19:16:49 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:11.915 19:16:49 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:11.915 19:16:49 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:11.915 19:16:49 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:11.915 19:16:49 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:11.915 19:16:49 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:11.915 19:16:49 -- ftl/fio.sh@45 -- # svcpid=72117 00:16:11.915 19:16:49 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:11.915 19:16:49 -- ftl/fio.sh@46 -- # waitforlisten 72117 00:16:11.915 19:16:49 -- common/autotest_common.sh@817 -- # '[' -z 72117 ']' 00:16:11.915 19:16:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.915 19:16:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:11.915 19:16:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.915 19:16:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:11.915 19:16:49 -- common/autotest_common.sh@10 -- # set +x 00:16:12.174 [2024-02-14 19:16:49.428839] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
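The ftl_fio_basic run that starts here assembles its ftl0 bdev from the two namespaces selected above. Condensing the RPCs traced over the following log into one sketch (UUIDs are per-run values printed back by rpc.py; the 103424 MB lvol and 5171 MB cache split match this run's sizes):

    rpc=scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0   # base namespace -> nvme0n1
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # capture the lvstore UUID
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # thin-provisioned 103424 MB lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0    # cache namespace -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # carve nvc0n1p0 for the write buffer
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60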
00:16:12.174 [2024-02-14 19:16:49.428987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72117 ] 00:16:12.433 [2024-02-14 19:16:49.598822] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:12.433 [2024-02-14 19:16:49.769270] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:12.433 [2024-02-14 19:16:49.769699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.433 [2024-02-14 19:16:49.769828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.433 [2024-02-14 19:16:49.769846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.806 19:16:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:13.806 19:16:51 -- common/autotest_common.sh@850 -- # return 0 00:16:13.806 19:16:51 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:13.806 19:16:51 -- ftl/common.sh@54 -- # local name=nvme0 00:16:13.806 19:16:51 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:13.806 19:16:51 -- ftl/common.sh@56 -- # local size=103424 00:16:13.806 19:16:51 -- ftl/common.sh@59 -- # local base_bdev 00:16:13.806 19:16:51 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:14.063 19:16:51 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:14.063 19:16:51 -- ftl/common.sh@62 -- # local base_size 00:16:14.063 19:16:51 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:14.063 19:16:51 -- common/autotest_common.sh@1355 -- # local bdev_name=nvme0n1 00:16:14.063 19:16:51 -- common/autotest_common.sh@1356 -- # local bdev_info 00:16:14.063 19:16:51 -- common/autotest_common.sh@1357 -- # local bs 00:16:14.063 19:16:51 -- common/autotest_common.sh@1358 -- # local nb 00:16:14.063 19:16:51 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:14.321 19:16:51 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:16:14.321 { 00:16:14.321 "name": "nvme0n1", 00:16:14.321 "aliases": [ 00:16:14.321 "d9ae4eda-0c35-420e-9833-6df7c219b5d5" 00:16:14.321 ], 00:16:14.321 "product_name": "NVMe disk", 00:16:14.321 "block_size": 4096, 00:16:14.321 "num_blocks": 1310720, 00:16:14.321 "uuid": "d9ae4eda-0c35-420e-9833-6df7c219b5d5", 00:16:14.321 "assigned_rate_limits": { 00:16:14.321 "rw_ios_per_sec": 0, 00:16:14.321 "rw_mbytes_per_sec": 0, 00:16:14.321 "r_mbytes_per_sec": 0, 00:16:14.321 "w_mbytes_per_sec": 0 00:16:14.321 }, 00:16:14.321 "claimed": false, 00:16:14.321 "zoned": false, 00:16:14.321 "supported_io_types": { 00:16:14.321 "read": true, 00:16:14.321 "write": true, 00:16:14.321 "unmap": true, 00:16:14.321 "write_zeroes": true, 00:16:14.321 "flush": true, 00:16:14.321 "reset": true, 00:16:14.321 "compare": true, 00:16:14.321 "compare_and_write": false, 00:16:14.321 "abort": true, 00:16:14.321 "nvme_admin": true, 00:16:14.321 "nvme_io": true 00:16:14.321 }, 00:16:14.321 "driver_specific": { 00:16:14.321 "nvme": [ 00:16:14.321 { 00:16:14.321 "pci_address": "0000:00:07.0", 00:16:14.321 "trid": { 00:16:14.321 "trtype": "PCIe", 00:16:14.321 "traddr": "0000:00:07.0" 00:16:14.321 }, 00:16:14.321 "ctrlr_data": { 00:16:14.321 "cntlid": 0, 00:16:14.321 "vendor_id": "0x1b36", 00:16:14.321 "model_number": "QEMU NVMe Ctrl", 00:16:14.321 "serial_number": 
"12341", 00:16:14.321 "firmware_revision": "8.0.0", 00:16:14.321 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:14.321 "oacs": { 00:16:14.321 "security": 0, 00:16:14.321 "format": 1, 00:16:14.321 "firmware": 0, 00:16:14.321 "ns_manage": 1 00:16:14.321 }, 00:16:14.321 "multi_ctrlr": false, 00:16:14.321 "ana_reporting": false 00:16:14.321 }, 00:16:14.321 "vs": { 00:16:14.321 "nvme_version": "1.4" 00:16:14.321 }, 00:16:14.321 "ns_data": { 00:16:14.321 "id": 1, 00:16:14.321 "can_share": false 00:16:14.321 } 00:16:14.321 } 00:16:14.321 ], 00:16:14.321 "mp_policy": "active_passive" 00:16:14.321 } 00:16:14.321 } 00:16:14.321 ]' 00:16:14.321 19:16:51 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:16:14.321 19:16:51 -- common/autotest_common.sh@1360 -- # bs=4096 00:16:14.321 19:16:51 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:16:14.321 19:16:51 -- common/autotest_common.sh@1361 -- # nb=1310720 00:16:14.321 19:16:51 -- common/autotest_common.sh@1364 -- # bdev_size=5120 00:16:14.321 19:16:51 -- common/autotest_common.sh@1365 -- # echo 5120 00:16:14.321 19:16:51 -- ftl/common.sh@63 -- # base_size=5120 00:16:14.321 19:16:51 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:14.321 19:16:51 -- ftl/common.sh@67 -- # clear_lvols 00:16:14.321 19:16:51 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:14.321 19:16:51 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:14.579 19:16:51 -- ftl/common.sh@28 -- # stores= 00:16:14.579 19:16:51 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:15.145 19:16:52 -- ftl/common.sh@68 -- # lvs=113acd72-8889-465b-a948-705fe3b05d49 00:16:15.145 19:16:52 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 113acd72-8889-465b-a948-705fe3b05d49 00:16:15.145 19:16:52 -- ftl/fio.sh@48 -- # split_bdev=01c28b0f-3279-436f-be29-23693a48236d 00:16:15.145 19:16:52 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 01c28b0f-3279-436f-be29-23693a48236d 00:16:15.145 19:16:52 -- ftl/common.sh@35 -- # local name=nvc0 00:16:15.145 19:16:52 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:15.145 19:16:52 -- ftl/common.sh@37 -- # local base_bdev=01c28b0f-3279-436f-be29-23693a48236d 00:16:15.145 19:16:52 -- ftl/common.sh@38 -- # local cache_size= 00:16:15.145 19:16:52 -- ftl/common.sh@41 -- # get_bdev_size 01c28b0f-3279-436f-be29-23693a48236d 00:16:15.145 19:16:52 -- common/autotest_common.sh@1355 -- # local bdev_name=01c28b0f-3279-436f-be29-23693a48236d 00:16:15.145 19:16:52 -- common/autotest_common.sh@1356 -- # local bdev_info 00:16:15.145 19:16:52 -- common/autotest_common.sh@1357 -- # local bs 00:16:15.145 19:16:52 -- common/autotest_common.sh@1358 -- # local nb 00:16:15.145 19:16:52 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01c28b0f-3279-436f-be29-23693a48236d 00:16:15.403 19:16:52 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:16:15.403 { 00:16:15.403 "name": "01c28b0f-3279-436f-be29-23693a48236d", 00:16:15.403 "aliases": [ 00:16:15.403 "lvs/nvme0n1p0" 00:16:15.403 ], 00:16:15.403 "product_name": "Logical Volume", 00:16:15.403 "block_size": 4096, 00:16:15.403 "num_blocks": 26476544, 00:16:15.403 "uuid": "01c28b0f-3279-436f-be29-23693a48236d", 00:16:15.403 "assigned_rate_limits": { 00:16:15.403 "rw_ios_per_sec": 0, 00:16:15.403 "rw_mbytes_per_sec": 0, 00:16:15.403 "r_mbytes_per_sec": 0, 00:16:15.403 
"w_mbytes_per_sec": 0 00:16:15.403 }, 00:16:15.403 "claimed": false, 00:16:15.403 "zoned": false, 00:16:15.403 "supported_io_types": { 00:16:15.403 "read": true, 00:16:15.403 "write": true, 00:16:15.403 "unmap": true, 00:16:15.403 "write_zeroes": true, 00:16:15.403 "flush": false, 00:16:15.403 "reset": true, 00:16:15.403 "compare": false, 00:16:15.403 "compare_and_write": false, 00:16:15.403 "abort": false, 00:16:15.403 "nvme_admin": false, 00:16:15.403 "nvme_io": false 00:16:15.403 }, 00:16:15.403 "driver_specific": { 00:16:15.403 "lvol": { 00:16:15.403 "lvol_store_uuid": "113acd72-8889-465b-a948-705fe3b05d49", 00:16:15.403 "base_bdev": "nvme0n1", 00:16:15.403 "thin_provision": true, 00:16:15.403 "snapshot": false, 00:16:15.403 "clone": false, 00:16:15.403 "esnap_clone": false 00:16:15.403 } 00:16:15.403 } 00:16:15.403 } 00:16:15.403 ]' 00:16:15.403 19:16:52 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:16:15.403 19:16:52 -- common/autotest_common.sh@1360 -- # bs=4096 00:16:15.403 19:16:52 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:16:15.661 19:16:52 -- common/autotest_common.sh@1361 -- # nb=26476544 00:16:15.661 19:16:52 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:16:15.661 19:16:52 -- common/autotest_common.sh@1365 -- # echo 103424 00:16:15.661 19:16:52 -- ftl/common.sh@41 -- # local base_size=5171 00:16:15.661 19:16:52 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:15.661 19:16:52 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:15.919 19:16:53 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:15.919 19:16:53 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:15.919 19:16:53 -- ftl/common.sh@48 -- # get_bdev_size 01c28b0f-3279-436f-be29-23693a48236d 00:16:15.919 19:16:53 -- common/autotest_common.sh@1355 -- # local bdev_name=01c28b0f-3279-436f-be29-23693a48236d 00:16:15.919 19:16:53 -- common/autotest_common.sh@1356 -- # local bdev_info 00:16:15.919 19:16:53 -- common/autotest_common.sh@1357 -- # local bs 00:16:15.919 19:16:53 -- common/autotest_common.sh@1358 -- # local nb 00:16:15.919 19:16:53 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01c28b0f-3279-436f-be29-23693a48236d 00:16:16.177 19:16:53 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:16:16.177 { 00:16:16.177 "name": "01c28b0f-3279-436f-be29-23693a48236d", 00:16:16.177 "aliases": [ 00:16:16.177 "lvs/nvme0n1p0" 00:16:16.177 ], 00:16:16.177 "product_name": "Logical Volume", 00:16:16.177 "block_size": 4096, 00:16:16.177 "num_blocks": 26476544, 00:16:16.177 "uuid": "01c28b0f-3279-436f-be29-23693a48236d", 00:16:16.177 "assigned_rate_limits": { 00:16:16.177 "rw_ios_per_sec": 0, 00:16:16.177 "rw_mbytes_per_sec": 0, 00:16:16.177 "r_mbytes_per_sec": 0, 00:16:16.177 "w_mbytes_per_sec": 0 00:16:16.177 }, 00:16:16.177 "claimed": false, 00:16:16.177 "zoned": false, 00:16:16.177 "supported_io_types": { 00:16:16.177 "read": true, 00:16:16.177 "write": true, 00:16:16.177 "unmap": true, 00:16:16.177 "write_zeroes": true, 00:16:16.177 "flush": false, 00:16:16.177 "reset": true, 00:16:16.177 "compare": false, 00:16:16.177 "compare_and_write": false, 00:16:16.177 "abort": false, 00:16:16.177 "nvme_admin": false, 00:16:16.177 "nvme_io": false 00:16:16.177 }, 00:16:16.177 "driver_specific": { 00:16:16.177 "lvol": { 00:16:16.177 "lvol_store_uuid": "113acd72-8889-465b-a948-705fe3b05d49", 00:16:16.177 "base_bdev": "nvme0n1", 00:16:16.177 "thin_provision": true, 
00:16:16.177 "snapshot": false, 00:16:16.177 "clone": false, 00:16:16.177 "esnap_clone": false 00:16:16.177 } 00:16:16.177 } 00:16:16.177 } 00:16:16.177 ]' 00:16:16.177 19:16:53 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:16:16.177 19:16:53 -- common/autotest_common.sh@1360 -- # bs=4096 00:16:16.177 19:16:53 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:16:16.177 19:16:53 -- common/autotest_common.sh@1361 -- # nb=26476544 00:16:16.177 19:16:53 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:16:16.177 19:16:53 -- common/autotest_common.sh@1365 -- # echo 103424 00:16:16.177 19:16:53 -- ftl/common.sh@48 -- # cache_size=5171 00:16:16.177 19:16:53 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:16.434 19:16:53 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:16.434 19:16:53 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:16.434 19:16:53 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:16.434 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:16.434 19:16:53 -- ftl/fio.sh@56 -- # get_bdev_size 01c28b0f-3279-436f-be29-23693a48236d 00:16:16.434 19:16:53 -- common/autotest_common.sh@1355 -- # local bdev_name=01c28b0f-3279-436f-be29-23693a48236d 00:16:16.434 19:16:53 -- common/autotest_common.sh@1356 -- # local bdev_info 00:16:16.434 19:16:53 -- common/autotest_common.sh@1357 -- # local bs 00:16:16.434 19:16:53 -- common/autotest_common.sh@1358 -- # local nb 00:16:16.434 19:16:53 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01c28b0f-3279-436f-be29-23693a48236d 00:16:16.692 19:16:53 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:16:16.692 { 00:16:16.692 "name": "01c28b0f-3279-436f-be29-23693a48236d", 00:16:16.692 "aliases": [ 00:16:16.692 "lvs/nvme0n1p0" 00:16:16.692 ], 00:16:16.692 "product_name": "Logical Volume", 00:16:16.692 "block_size": 4096, 00:16:16.692 "num_blocks": 26476544, 00:16:16.692 "uuid": "01c28b0f-3279-436f-be29-23693a48236d", 00:16:16.692 "assigned_rate_limits": { 00:16:16.692 "rw_ios_per_sec": 0, 00:16:16.692 "rw_mbytes_per_sec": 0, 00:16:16.692 "r_mbytes_per_sec": 0, 00:16:16.692 "w_mbytes_per_sec": 0 00:16:16.692 }, 00:16:16.692 "claimed": false, 00:16:16.692 "zoned": false, 00:16:16.692 "supported_io_types": { 00:16:16.692 "read": true, 00:16:16.692 "write": true, 00:16:16.692 "unmap": true, 00:16:16.692 "write_zeroes": true, 00:16:16.692 "flush": false, 00:16:16.692 "reset": true, 00:16:16.692 "compare": false, 00:16:16.692 "compare_and_write": false, 00:16:16.692 "abort": false, 00:16:16.692 "nvme_admin": false, 00:16:16.692 "nvme_io": false 00:16:16.692 }, 00:16:16.692 "driver_specific": { 00:16:16.692 "lvol": { 00:16:16.692 "lvol_store_uuid": "113acd72-8889-465b-a948-705fe3b05d49", 00:16:16.692 "base_bdev": "nvme0n1", 00:16:16.692 "thin_provision": true, 00:16:16.692 "snapshot": false, 00:16:16.692 "clone": false, 00:16:16.692 "esnap_clone": false 00:16:16.692 } 00:16:16.692 } 00:16:16.692 } 00:16:16.692 ]' 00:16:16.692 19:16:53 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:16:16.692 19:16:54 -- common/autotest_common.sh@1360 -- # bs=4096 00:16:16.692 19:16:54 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:16:16.692 19:16:54 -- common/autotest_common.sh@1361 -- # nb=26476544 00:16:16.692 19:16:54 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:16:16.692 19:16:54 -- common/autotest_common.sh@1365 -- # echo 103424 00:16:16.692 
19:16:54 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:16.692 19:16:54 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:16.692 19:16:54 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 01c28b0f-3279-436f-be29-23693a48236d -c nvc0n1p0 --l2p_dram_limit 60 00:16:16.951 [2024-02-14 19:16:54.253142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.253209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:16.951 [2024-02-14 19:16:54.253234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:16.951 [2024-02-14 19:16:54.253246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.253332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.253350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:16.951 [2024-02-14 19:16:54.253364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:16.951 [2024-02-14 19:16:54.253374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.253412] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:16.951 [2024-02-14 19:16:54.254565] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:16.951 [2024-02-14 19:16:54.254627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.254642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:16.951 [2024-02-14 19:16:54.254657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:16:16.951 [2024-02-14 19:16:54.254668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.254813] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 69bea7f9-4d69-422e-a689-47513e93f7fa 00:16:16.951 [2024-02-14 19:16:54.255924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.255978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:16.951 [2024-02-14 19:16:54.255994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:16.951 [2024-02-14 19:16:54.256008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.260351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.260417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:16.951 [2024-02-14 19:16:54.260433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.273 ms 00:16:16.951 [2024-02-14 19:16:54.260446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.260585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.260609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:16.951 [2024-02-14 19:16:54.260622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:16.951 [2024-02-14 19:16:54.260637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.260742] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.260780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:16.951 [2024-02-14 19:16:54.260794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:16.951 [2024-02-14 19:16:54.260811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.260850] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:16.951 [2024-02-14 19:16:54.265117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.265169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:16.951 [2024-02-14 19:16:54.265187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.272 ms 00:16:16.951 [2024-02-14 19:16:54.265198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.265251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.265266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:16.951 [2024-02-14 19:16:54.265280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:16.951 [2024-02-14 19:16:54.265291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.265343] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:16.951 [2024-02-14 19:16:54.265502] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:16.951 [2024-02-14 19:16:54.265543] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:16.951 [2024-02-14 19:16:54.265562] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:16.951 [2024-02-14 19:16:54.265578] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:16.951 [2024-02-14 19:16:54.265618] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:16.951 [2024-02-14 19:16:54.265635] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:16.951 [2024-02-14 19:16:54.265646] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:16.951 [2024-02-14 19:16:54.265664] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:16.951 [2024-02-14 19:16:54.265675] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:16.951 [2024-02-14 19:16:54.265690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.265702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:16.951 [2024-02-14 19:16:54.265723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:16:16.951 [2024-02-14 19:16:54.265750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.265832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.951 [2024-02-14 19:16:54.265848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:16.951 [2024-02-14 19:16:54.265863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.047 ms 00:16:16.951 [2024-02-14 19:16:54.265874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.951 [2024-02-14 19:16:54.265992] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:16.951 [2024-02-14 19:16:54.266008] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:16.951 [2024-02-14 19:16:54.266023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:16.951 [2024-02-14 19:16:54.266035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.951 [2024-02-14 19:16:54.266049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:16.951 [2024-02-14 19:16:54.266060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:16.951 [2024-02-14 19:16:54.266072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:16.951 [2024-02-14 19:16:54.266083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:16.951 [2024-02-14 19:16:54.266096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:16.951 [2024-02-14 19:16:54.266106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:16.951 [2024-02-14 19:16:54.266119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:16.952 [2024-02-14 19:16:54.266129] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:16.952 [2024-02-14 19:16:54.266142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:16.952 [2024-02-14 19:16:54.266153] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:16.952 [2024-02-14 19:16:54.266167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:16.952 [2024-02-14 19:16:54.266178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.952 [2024-02-14 19:16:54.266192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:16.952 [2024-02-14 19:16:54.266204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:16.952 [2024-02-14 19:16:54.266216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.952 [2024-02-14 19:16:54.266230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:16.952 [2024-02-14 19:16:54.266244] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:16.952 [2024-02-14 19:16:54.266256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:16.952 [2024-02-14 19:16:54.266268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:16.952 [2024-02-14 19:16:54.266279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:16.952 [2024-02-14 19:16:54.266291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.952 [2024-02-14 19:16:54.266302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:16.952 [2024-02-14 19:16:54.266314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:16.952 [2024-02-14 19:16:54.266325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.952 [2024-02-14 19:16:54.266337] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:16.952 [2024-02-14 19:16:54.266348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:16.952 [2024-02-14 19:16:54.266360] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.952 [2024-02-14 19:16:54.266370] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:16.952 [2024-02-14 19:16:54.266385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:16.952 [2024-02-14 19:16:54.266395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.952 [2024-02-14 19:16:54.266407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:16.952 [2024-02-14 19:16:54.266418] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:16.952 [2024-02-14 19:16:54.266449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:16.952 [2024-02-14 19:16:54.266460] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:16.952 [2024-02-14 19:16:54.266475] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:16.952 [2024-02-14 19:16:54.266511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:16.952 [2024-02-14 19:16:54.266526] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:16.952 [2024-02-14 19:16:54.266538] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:16.952 [2024-02-14 19:16:54.266552] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:16.952 [2024-02-14 19:16:54.266564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.952 [2024-02-14 19:16:54.266578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:16.952 [2024-02-14 19:16:54.266589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:16.952 [2024-02-14 19:16:54.266601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:16.952 [2024-02-14 19:16:54.266613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:16.952 [2024-02-14 19:16:54.266628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:16.952 [2024-02-14 19:16:54.266639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:16.952 [2024-02-14 19:16:54.266655] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:16.952 [2024-02-14 19:16:54.266672] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:16.952 [2024-02-14 19:16:54.266693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:16.952 [2024-02-14 19:16:54.266706] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:16.952 [2024-02-14 19:16:54.266719] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:16.952 [2024-02-14 19:16:54.266731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:16.952 [2024-02-14 19:16:54.266744] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:16.952 [2024-02-14 19:16:54.266756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:16.952 
[2024-02-14 19:16:54.266770] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:16.952 [2024-02-14 19:16:54.266781] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:16.952 [2024-02-14 19:16:54.266797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:16.952 [2024-02-14 19:16:54.266809] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:16.952 [2024-02-14 19:16:54.266823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:16.952 [2024-02-14 19:16:54.266834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:16.952 [2024-02-14 19:16:54.266850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:16.952 [2024-02-14 19:16:54.266862] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:16.952 [2024-02-14 19:16:54.266877] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:16.952 [2024-02-14 19:16:54.266889] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:16.952 [2024-02-14 19:16:54.266903] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:16.952 [2024-02-14 19:16:54.266915] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:16.952 [2024-02-14 19:16:54.266928] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:16.952 [2024-02-14 19:16:54.266942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.952 [2024-02-14 19:16:54.266972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:16.952 [2024-02-14 19:16:54.266984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:16:16.952 [2024-02-14 19:16:54.266997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.952 [2024-02-14 19:16:54.283706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.952 [2024-02-14 19:16:54.283773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:16.952 [2024-02-14 19:16:54.283791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.633 ms 00:16:16.952 [2024-02-14 19:16:54.283804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.952 [2024-02-14 19:16:54.283905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.952 [2024-02-14 19:16:54.283924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:16.952 [2024-02-14 19:16:54.283937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:16.952 [2024-02-14 19:16:54.283948] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.952 [2024-02-14 19:16:54.322880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.952 [2024-02-14 19:16:54.322970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:16.952 [2024-02-14 19:16:54.322991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.785 ms 00:16:16.952 [2024-02-14 19:16:54.323005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.952 [2024-02-14 19:16:54.323066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.952 [2024-02-14 19:16:54.323085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:16.952 [2024-02-14 19:16:54.323100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:16.952 [2024-02-14 19:16:54.323114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.952 [2024-02-14 19:16:54.323547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.952 [2024-02-14 19:16:54.323590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:16.952 [2024-02-14 19:16:54.323607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:16:16.952 [2024-02-14 19:16:54.323621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.952 [2024-02-14 19:16:54.323778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.952 [2024-02-14 19:16:54.323803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:16.952 [2024-02-14 19:16:54.323817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:16:16.952 [2024-02-14 19:16:54.323831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.952 [2024-02-14 19:16:54.354150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.952 [2024-02-14 19:16:54.354221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:16.952 [2024-02-14 19:16:54.354241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.283 ms 00:16:16.952 [2024-02-14 19:16:54.354256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.952 [2024-02-14 19:16:54.367914] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:17.210 [2024-02-14 19:16:54.382020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.210 [2024-02-14 19:16:54.382097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:17.210 [2024-02-14 19:16:54.382120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.569 ms 00:16:17.210 [2024-02-14 19:16:54.382133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.210 [2024-02-14 19:16:54.480794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.210 [2024-02-14 19:16:54.480876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:17.210 [2024-02-14 19:16:54.480899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.592 ms 00:16:17.210 [2024-02-14 19:16:54.480911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.210 [2024-02-14 19:16:54.480980] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
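The L2P figures in the startup trace are mutually consistent and can be sanity-checked with plain shell arithmetic; these are relationships observed in this log, not formulas quoted from the FTL code:

    echo $(( 20971520 * 4096 / 1024**2 ))   # 81920 MiB (80 GiB) of user capacity for 20971520 L2P entries
    echo $(( 20971520 * 4 / 1024**2 ))      # 80 MiB L2P table at 4 bytes/entry -- the "l2p" region above

With --l2p_dram_limit 60 only part of that 80 MiB table may stay resident, hence "l2p maximum resident size is: 59 (of 60) MiB"; the ftl0 bdev created here reports the same 20971520 blocks in the bdev_get_bdevs output further down.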
00:16:17.210 [2024-02-14 19:16:54.481001] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:21.415 [2024-02-14 19:16:58.340182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.340260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:21.415 [2024-02-14 19:16:58.340300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3859.232 ms 00:16:21.415 [2024-02-14 19:16:58.340313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.340561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.340581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:21.415 [2024-02-14 19:16:58.340597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:16:21.415 [2024-02-14 19:16:58.340609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.369541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.369598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:21.415 [2024-02-14 19:16:58.369636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.856 ms 00:16:21.415 [2024-02-14 19:16:58.369649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.397547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.397650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:21.415 [2024-02-14 19:16:58.397676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.840 ms 00:16:21.415 [2024-02-14 19:16:58.397688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.398095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.398118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:21.415 [2024-02-14 19:16:58.398134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:16:21.415 [2024-02-14 19:16:58.398144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.481583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.481649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:21.415 [2024-02-14 19:16:58.481670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.371 ms 00:16:21.415 [2024-02-14 19:16:58.481682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.512330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.512384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:21.415 [2024-02-14 19:16:58.512423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.592 ms 00:16:21.415 [2024-02-14 19:16:58.512436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.516287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.516336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
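As a rough derived figure, the scrub step above appears to have written the full 4 GiB NV-cache data region in about 3.86 s, i.e. roughly 1 GiB/s of sequential write to the cache device (4 GiB / 3.859 s); this is only a plausibility check on the reported duration, not a number printed by the FTL itself.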
00:16:21.415 [2024-02-14 19:16:58.516356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.782 ms 00:16:21.415 [2024-02-14 19:16:58.516367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.546290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.546342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:21.415 [2024-02-14 19:16:58.546360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.856 ms 00:16:21.415 [2024-02-14 19:16:58.546372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.546448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.546468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:21.415 [2024-02-14 19:16:58.546483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:21.415 [2024-02-14 19:16:58.546508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.546687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.415 [2024-02-14 19:16:58.546708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:21.415 [2024-02-14 19:16:58.546728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:21.415 [2024-02-14 19:16:58.546740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.415 [2024-02-14 19:16:58.548033] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4294.319 ms, result 0 00:16:21.415 { 00:16:21.415 "name": "ftl0", 00:16:21.415 "uuid": "69bea7f9-4d69-422e-a689-47513e93f7fa" 00:16:21.415 } 00:16:21.415 19:16:58 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:21.415 19:16:58 -- common/autotest_common.sh@885 -- # local bdev_name=ftl0 00:16:21.415 19:16:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:21.415 19:16:58 -- common/autotest_common.sh@887 -- # local i 00:16:21.415 19:16:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:21.415 19:16:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:21.415 19:16:58 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:21.674 19:16:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:21.674 [ 00:16:21.674 { 00:16:21.674 "name": "ftl0", 00:16:21.674 "aliases": [ 00:16:21.674 "69bea7f9-4d69-422e-a689-47513e93f7fa" 00:16:21.674 ], 00:16:21.674 "product_name": "FTL disk", 00:16:21.674 "block_size": 4096, 00:16:21.674 "num_blocks": 20971520, 00:16:21.674 "uuid": "69bea7f9-4d69-422e-a689-47513e93f7fa", 00:16:21.674 "assigned_rate_limits": { 00:16:21.674 "rw_ios_per_sec": 0, 00:16:21.674 "rw_mbytes_per_sec": 0, 00:16:21.674 "r_mbytes_per_sec": 0, 00:16:21.674 "w_mbytes_per_sec": 0 00:16:21.674 }, 00:16:21.674 "claimed": false, 00:16:21.674 "zoned": false, 00:16:21.674 "supported_io_types": { 00:16:21.674 "read": true, 00:16:21.674 "write": true, 00:16:21.674 "unmap": true, 00:16:21.674 "write_zeroes": true, 00:16:21.674 "flush": true, 00:16:21.674 "reset": false, 00:16:21.674 "compare": false, 00:16:21.674 "compare_and_write": false, 00:16:21.674 "abort": false, 00:16:21.674 "nvme_admin": false, 00:16:21.674 "nvme_io": false 00:16:21.674 }, 
00:16:21.674 "driver_specific": { 00:16:21.674 "ftl": { 00:16:21.674 "base_bdev": "01c28b0f-3279-436f-be29-23693a48236d", 00:16:21.674 "cache": "nvc0n1p0" 00:16:21.674 } 00:16:21.674 } 00:16:21.674 } 00:16:21.674 ] 00:16:21.674 19:16:59 -- common/autotest_common.sh@893 -- # return 0 00:16:21.674 19:16:59 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:21.674 19:16:59 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:21.934 19:16:59 -- ftl/fio.sh@70 -- # echo ']}' 00:16:21.934 19:16:59 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:22.194 [2024-02-14 19:16:59.520824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.194 [2024-02-14 19:16:59.520909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:22.194 [2024-02-14 19:16:59.520943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:22.194 [2024-02-14 19:16:59.520968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.194 [2024-02-14 19:16:59.521012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:22.194 [2024-02-14 19:16:59.524164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.194 [2024-02-14 19:16:59.524208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:22.194 [2024-02-14 19:16:59.524225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.125 ms 00:16:22.194 [2024-02-14 19:16:59.524238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.194 [2024-02-14 19:16:59.524716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.194 [2024-02-14 19:16:59.524741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:22.194 [2024-02-14 19:16:59.524757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:16:22.194 [2024-02-14 19:16:59.524767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.194 [2024-02-14 19:16:59.528027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.194 [2024-02-14 19:16:59.528051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:22.194 [2024-02-14 19:16:59.528083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:16:22.194 [2024-02-14 19:16:59.528094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.194 [2024-02-14 19:16:59.534510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.194 [2024-02-14 19:16:59.534559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:22.194 [2024-02-14 19:16:59.534575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.377 ms 00:16:22.194 [2024-02-14 19:16:59.534589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.194 [2024-02-14 19:16:59.564370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.194 [2024-02-14 19:16:59.564422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:22.194 [2024-02-14 19:16:59.564440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.674 ms 00:16:22.194 [2024-02-14 19:16:59.564451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.194 [2024-02-14 19:16:59.583593] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.194 [2024-02-14 19:16:59.583647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:22.194 [2024-02-14 19:16:59.583668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.079 ms 00:16:22.194 [2024-02-14 19:16:59.583680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.194 [2024-02-14 19:16:59.583955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.194 [2024-02-14 19:16:59.583999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:22.194 [2024-02-14 19:16:59.584020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:16:22.194 [2024-02-14 19:16:59.584032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.454 [2024-02-14 19:16:59.616422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.454 [2024-02-14 19:16:59.616473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:22.454 [2024-02-14 19:16:59.616492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.351 ms 00:16:22.454 [2024-02-14 19:16:59.616513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.454 [2024-02-14 19:16:59.646917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.454 [2024-02-14 19:16:59.646956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:22.454 [2024-02-14 19:16:59.646976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.348 ms 00:16:22.454 [2024-02-14 19:16:59.646988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.454 [2024-02-14 19:16:59.676324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.454 [2024-02-14 19:16:59.676374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:22.454 [2024-02-14 19:16:59.676392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.275 ms 00:16:22.454 [2024-02-14 19:16:59.676403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.454 [2024-02-14 19:16:59.705682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.454 [2024-02-14 19:16:59.705718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:22.454 [2024-02-14 19:16:59.705736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.110 ms 00:16:22.454 [2024-02-14 19:16:59.705747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.454 [2024-02-14 19:16:59.705803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:22.454 [2024-02-14 19:16:59.705825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705890] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.705994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 
19:16:59.706238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:16:22.454 [2024-02-14 19:16:59.706595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:22.454 [2024-02-14 19:16:59.706901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.706917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.706929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.706943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.706956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.706970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.706982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.706995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:22.455 [2024-02-14 19:16:59.707231] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:22.455 [2024-02-14 19:16:59.707244] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 69bea7f9-4d69-422e-a689-47513e93f7fa 00:16:22.455 [2024-02-14 19:16:59.707257] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:22.455 [2024-02-14 19:16:59.707270] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:22.455 [2024-02-14 19:16:59.707281] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:22.455 [2024-02-14 19:16:59.707294] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:22.455 [2024-02-14 19:16:59.707305] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:22.455 [2024-02-14 19:16:59.707319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:22.455 [2024-02-14 19:16:59.707330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:22.455 [2024-02-14 19:16:59.707342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:22.455 [2024-02-14 19:16:59.707353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:22.455 [2024-02-14 19:16:59.707368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.455 [2024-02-14 19:16:59.707383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:22.455 [2024-02-14 19:16:59.707397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:16:22.455 [2024-02-14 19:16:59.707409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.455 [2024-02-14 19:16:59.723540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.455 [2024-02-14 19:16:59.723574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:22.455 [2024-02-14 19:16:59.723593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.060 ms 00:16:22.455 [2024-02-14 19:16:59.723605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.455 [2024-02-14 19:16:59.723846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.455 [2024-02-14 19:16:59.723868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:22.455 [2024-02-14 19:16:59.723883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:16:22.455 [2024-02-14 19:16:59.723894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.455 [2024-02-14 19:16:59.777105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.455 [2024-02-14 19:16:59.777165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:22.455 [2024-02-14 19:16:59.777184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.455 [2024-02-14 19:16:59.777195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.455 [2024-02-14 19:16:59.777275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.455 [2024-02-14 19:16:59.777289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:22.455 [2024-02-14 19:16:59.777303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.455 [2024-02-14 19:16:59.777313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.455 [2024-02-14 19:16:59.777477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.455 [2024-02-14 19:16:59.777516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:22.455 [2024-02-14 19:16:59.777550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.455 [2024-02-14 19:16:59.777563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.455 [2024-02-14 19:16:59.777609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.455 [2024-02-14 19:16:59.777627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
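In the statistics block above, WAF (write amplification factor) is media writes divided by host writes; with total writes: 960 (presumably all FTL-internal metadata and scrub writes at this point) and user writes: 0, the quotient is undefined and the dump prints it as "inf". Once fio starts issuing I/O the user-write counter becomes non-zero and WAF takes a finite value.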
00:16:22.455 [2024-02-14 19:16:59.777641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.455 [2024-02-14 19:16:59.777653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.714 [2024-02-14 19:16:59.886442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.714 [2024-02-14 19:16:59.886542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:22.714 [2024-02-14 19:16:59.886564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.714 [2024-02-14 19:16:59.886576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.714 [2024-02-14 19:16:59.923127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.714 [2024-02-14 19:16:59.923179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:22.714 [2024-02-14 19:16:59.923201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.714 [2024-02-14 19:16:59.923214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.714 [2024-02-14 19:16:59.923328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.714 [2024-02-14 19:16:59.923347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:22.714 [2024-02-14 19:16:59.923362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.714 [2024-02-14 19:16:59.923374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.714 [2024-02-14 19:16:59.923457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.714 [2024-02-14 19:16:59.923475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:22.714 [2024-02-14 19:16:59.923519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.714 [2024-02-14 19:16:59.923533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.714 [2024-02-14 19:16:59.923683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.714 [2024-02-14 19:16:59.923708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:22.714 [2024-02-14 19:16:59.923724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.714 [2024-02-14 19:16:59.923736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.714 [2024-02-14 19:16:59.923819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.714 [2024-02-14 19:16:59.923838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:22.714 [2024-02-14 19:16:59.923854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.714 [2024-02-14 19:16:59.923868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.714 [2024-02-14 19:16:59.923927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.714 [2024-02-14 19:16:59.923944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:22.714 [2024-02-14 19:16:59.923959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.714 [2024-02-14 19:16:59.923970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.714 [2024-02-14 19:16:59.924039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.714 [2024-02-14 19:16:59.924058] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:22.714 [2024-02-14 19:16:59.924076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.714 [2024-02-14 19:16:59.924088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.714 [2024-02-14 19:16:59.924278] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 403.417 ms, result 0 00:16:22.714 true 00:16:22.714 19:16:59 -- ftl/fio.sh@75 -- # killprocess 72117 00:16:22.714 19:16:59 -- common/autotest_common.sh@924 -- # '[' -z 72117 ']' 00:16:22.714 19:16:59 -- common/autotest_common.sh@928 -- # kill -0 72117 00:16:22.714 19:16:59 -- common/autotest_common.sh@929 -- # uname 00:16:22.714 19:16:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:22.714 19:16:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 72117 00:16:22.714 19:16:59 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:22.714 19:16:59 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:22.714 killing process with pid 72117 00:16:22.714 19:16:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 72117' 00:16:22.714 19:16:59 -- common/autotest_common.sh@943 -- # kill 72117 00:16:22.714 19:16:59 -- common/autotest_common.sh@948 -- # wait 72117 00:16:26.904 19:17:04 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:26.904 19:17:04 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:26.904 19:17:04 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:26.904 19:17:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:26.904 19:17:04 -- common/autotest_common.sh@10 -- # set +x 00:16:27.164 19:17:04 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:27.164 19:17:04 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:27.164 19:17:04 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:16:27.164 19:17:04 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:27.164 19:17:04 -- common/autotest_common.sh@1316 -- # local sanitizers 00:16:27.164 19:17:04 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:27.164 19:17:04 -- common/autotest_common.sh@1318 -- # shift 00:16:27.164 19:17:04 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:16:27.164 19:17:04 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:16:27.164 19:17:04 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:27.164 19:17:04 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:16:27.164 19:17:04 -- common/autotest_common.sh@1322 -- # grep libasan 00:16:27.164 19:17:04 -- common/autotest_common.sh@1322 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:27.164 19:17:04 -- common/autotest_common.sh@1323 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:27.164 19:17:04 -- common/autotest_common.sh@1324 -- # break 00:16:27.164 19:17:04 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:27.164 19:17:04 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:27.164 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:27.164 fio-3.35 00:16:27.164 Starting 1 thread 00:16:32.475 00:16:32.475 test: (groupid=0, jobs=1): err= 0: pid=72358: Wed Feb 14 19:17:09 2024 00:16:32.475 read: IOPS=935, BW=62.1MiB/s (65.1MB/s)(255MiB/4098msec) 00:16:32.475 slat (nsec): min=5029, max=72993, avg=6904.32, stdev=3236.73 00:16:32.475 clat (usec): min=334, max=756, avg=476.58, stdev=51.65 00:16:32.475 lat (usec): min=347, max=770, avg=483.48, stdev=52.45 00:16:32.475 clat percentiles (usec): 00:16:32.475 | 1.00th=[ 375], 5.00th=[ 412], 10.00th=[ 424], 20.00th=[ 437], 00:16:32.475 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 478], 00:16:32.475 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 570], 00:16:32.475 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[ 734], 99.95th=[ 750], 00:16:32.475 | 99.99th=[ 758] 00:16:32.475 write: IOPS=942, BW=62.6MiB/s (65.6MB/s)(256MiB/4093msec); 0 zone resets 00:16:32.475 slat (nsec): min=18000, max=88731, avg=23564.13, stdev=5878.35 00:16:32.475 clat (usec): min=390, max=1060, avg=544.94, stdev=63.00 00:16:32.475 lat (usec): min=410, max=1090, avg=568.51, stdev=63.57 00:16:32.475 clat percentiles (usec): 00:16:32.475 | 1.00th=[ 437], 5.00th=[ 461], 10.00th=[ 478], 20.00th=[ 502], 00:16:32.475 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:16:32.475 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 644], 00:16:32.475 | 99.00th=[ 799], 99.50th=[ 857], 99.90th=[ 922], 99.95th=[ 1004], 00:16:32.475 | 99.99th=[ 1057] 00:16:32.475 bw ( KiB/s): min=62016, max=65416, per=100.00%, avg=64107.00, stdev=1128.38, samples=8 00:16:32.475 iops : min= 912, max= 962, avg=942.75, stdev=16.59, samples=8 00:16:32.475 lat (usec) : 500=46.81%, 750=52.46%, 1000=0.70% 00:16:32.475 lat (msec) : 2=0.03% 00:16:32.475 cpu : usr=99.19%, sys=0.20%, ctx=4, majf=0, minf=1318 00:16:32.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.475 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.475 00:16:32.475 Run status group 0 (all jobs): 00:16:32.475 READ: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=255MiB (267MB), run=4098-4098msec 00:16:32.475 WRITE: bw=62.6MiB/s (65.6MB/s), 62.6MiB/s-62.6MiB/s (65.6MB/s-65.6MB/s), io=256MiB (269MB), run=4093-4093msec 00:16:33.851 ----------------------------------------------------- 00:16:33.851 Suppressions used: 00:16:33.851 count bytes template 00:16:33.851 1 5 /usr/src/fio/parse.c 00:16:33.851 1 8 libtcmalloc_minimal.so 00:16:33.851 1 904 libcrypto.so 00:16:33.851 ----------------------------------------------------- 00:16:33.851 00:16:34.110 19:17:11 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:34.110 19:17:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:34.110 19:17:11 -- common/autotest_common.sh@10 -- # set +x 00:16:34.110 19:17:11 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:34.110 19:17:11 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:34.110 19:17:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:34.110 19:17:11 -- common/autotest_common.sh@10 -- # set +x 00:16:34.110 19:17:11 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:34.110 19:17:11 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:34.110 19:17:11 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:16:34.110 19:17:11 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:34.110 19:17:11 -- common/autotest_common.sh@1316 -- # local sanitizers 00:16:34.110 19:17:11 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:34.110 19:17:11 -- common/autotest_common.sh@1318 -- # shift 00:16:34.110 19:17:11 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:16:34.110 19:17:11 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:16:34.110 19:17:11 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:34.110 19:17:11 -- common/autotest_common.sh@1322 -- # grep libasan 00:16:34.110 19:17:11 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:16:34.110 19:17:11 -- common/autotest_common.sh@1322 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:34.110 19:17:11 -- common/autotest_common.sh@1323 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:34.110 19:17:11 -- common/autotest_common.sh@1324 -- # break 00:16:34.110 19:17:11 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:34.110 19:17:11 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:34.369 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:34.369 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:34.369 fio-3.35 00:16:34.369 Starting 2 threads 00:17:06.441 00:17:06.441 first_half: (groupid=0, jobs=1): err= 0: pid=72462: Wed Feb 14 19:17:41 2024 00:17:06.441 read: IOPS=2297, BW=9190KiB/s (9411kB/s)(255MiB/28377msec) 00:17:06.441 slat (nsec): min=4305, max=46662, avg=7066.27, stdev=1941.00 00:17:06.441 clat (usec): min=880, max=375674, avg=40868.05, stdev=18914.92 00:17:06.441 lat (usec): min=886, max=375679, avg=40875.11, stdev=18915.04 00:17:06.441 clat percentiles (msec): 00:17:06.441 | 1.00th=[ 5], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 38], 00:17:06.441 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39], 00:17:06.441 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 44], 95.00th=[ 51], 00:17:06.441 | 99.00th=[ 142], 99.50th=[ 178], 99.90th=[ 268], 99.95th=[ 317], 00:17:06.441 | 99.99th=[ 368] 00:17:06.441 write: IOPS=3154, BW=12.3MiB/s (12.9MB/s)(256MiB/20778msec); 0 zone resets 00:17:06.441 slat (usec): min=4, max=438, avg= 9.09, stdev= 5.03 00:17:06.441 clat (usec): min=456, max=109641, avg=14742.23, stdev=26242.67 00:17:06.441 lat (usec): min=472, max=109648, avg=14751.31, stdev=26242.80 00:17:06.441 clat percentiles (usec): 00:17:06.441 | 1.00th=[ 971], 5.00th=[ 1237], 10.00th=[ 1418], 20.00th=[ 1696], 00:17:06.441 | 30.00th=[ 1958], 40.00th=[ 2409], 50.00th=[ 4817], 60.00th=[ 6390], 00:17:06.441 | 70.00th=[ 8029], 80.00th=[ 14091], 90.00th=[ 76022], 95.00th=[ 85459], 00:17:06.441 | 99.00th=[ 98042], 99.50th=[102237], 99.90th=[105382], 99.95th=[106431], 00:17:06.441 | 99.99th=[107480] 00:17:06.441 bw ( KiB/s): min= 824, max=40552, 
per=99.71%, avg=22795.13, stdev=11476.84, samples=23 00:17:06.441 iops : min= 206, max=10138, avg=5698.78, stdev=2869.21, samples=23 00:17:06.441 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.60% 00:17:06.441 lat (msec) : 2=15.35%, 4=8.37%, 10=13.10%, 20=7.52%, 50=46.57% 00:17:06.441 lat (msec) : 100=7.22%, 250=1.18%, 500=0.06% 00:17:06.441 cpu : usr=99.24%, sys=0.23%, ctx=36, majf=0, minf=5576 00:17:06.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:06.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.441 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.441 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.441 second_half: (groupid=0, jobs=1): err= 0: pid=72463: Wed Feb 14 19:17:41 2024 00:17:06.441 read: IOPS=2282, BW=9131KiB/s (9350kB/s)(255MiB/28561msec) 00:17:06.441 slat (nsec): min=4423, max=37262, avg=7081.49, stdev=1905.89 00:17:06.441 clat (usec): min=897, max=384910, avg=39795.17, stdev=17393.34 00:17:06.441 lat (usec): min=903, max=384919, avg=39802.25, stdev=17393.49 00:17:06.441 clat percentiles (msec): 00:17:06.441 | 1.00th=[ 7], 5.00th=[ 30], 10.00th=[ 37], 20.00th=[ 38], 00:17:06.441 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39], 00:17:06.441 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 44], 95.00th=[ 48], 00:17:06.441 | 99.00th=[ 136], 99.50th=[ 163], 99.90th=[ 194], 99.95th=[ 236], 00:17:06.441 | 99.99th=[ 380] 00:17:06.441 write: IOPS=2857, BW=11.2MiB/s (11.7MB/s)(256MiB/22934msec); 0 zone resets 00:17:06.441 slat (usec): min=5, max=1093, avg= 9.38, stdev= 6.92 00:17:06.441 clat (usec): min=492, max=109458, avg=16167.40, stdev=26717.16 00:17:06.441 lat (usec): min=500, max=109465, avg=16176.78, stdev=26717.34 00:17:06.441 clat percentiles (usec): 00:17:06.441 | 1.00th=[ 922], 5.00th=[ 1188], 10.00th=[ 1352], 20.00th=[ 1614], 00:17:06.441 | 30.00th=[ 1876], 40.00th=[ 2409], 50.00th=[ 4817], 60.00th=[ 7046], 00:17:06.441 | 70.00th=[ 12125], 80.00th=[ 15533], 90.00th=[ 76022], 95.00th=[ 85459], 00:17:06.441 | 99.00th=[ 98042], 99.50th=[102237], 99.90th=[105382], 99.95th=[107480], 00:17:06.441 | 99.99th=[108528] 00:17:06.441 bw ( KiB/s): min= 1112, max=41792, per=91.75%, avg=20974.16, stdev=10411.25, samples=25 00:17:06.441 iops : min= 278, max=10448, avg=5243.52, stdev=2602.79, samples=25 00:17:06.441 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.86% 00:17:06.441 lat (msec) : 2=16.02%, 4=7.11%, 10=10.51%, 20=9.82%, 50=47.33% 00:17:06.441 lat (msec) : 100=6.96%, 250=1.32%, 500=0.02% 00:17:06.441 cpu : usr=99.14%, sys=0.31%, ctx=46, majf=0, minf=5527 00:17:06.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:06.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.441 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.441 issued rwts: total=65199,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.441 00:17:06.441 Run status group 0 (all jobs): 00:17:06.441 READ: bw=17.8MiB/s (18.7MB/s), 9131KiB/s-9190KiB/s (9350kB/s-9411kB/s), io=509MiB (534MB), run=28377-28561msec 00:17:06.441 WRITE: bw=22.3MiB/s (23.4MB/s), 11.2MiB/s-12.3MiB/s (11.7MB/s-12.9MB/s), io=512MiB (537MB), run=20778-22934msec 00:17:06.441 ----------------------------------------------------- 00:17:06.441 Suppressions used: 00:17:06.441 count bytes 
template 00:17:06.441 2 10 /usr/src/fio/parse.c 00:17:06.441 1 96 /usr/src/fio/iolog.c 00:17:06.441 1 8 libtcmalloc_minimal.so 00:17:06.441 1 904 libcrypto.so 00:17:06.441 ----------------------------------------------------- 00:17:06.441 00:17:06.441 19:17:43 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:06.441 19:17:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:06.441 19:17:43 -- common/autotest_common.sh@10 -- # set +x 00:17:06.441 19:17:43 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:06.441 19:17:43 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:06.441 19:17:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:06.441 19:17:43 -- common/autotest_common.sh@10 -- # set +x 00:17:06.441 19:17:43 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:06.441 19:17:43 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:06.441 19:17:43 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:17:06.441 19:17:43 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:06.441 19:17:43 -- common/autotest_common.sh@1316 -- # local sanitizers 00:17:06.441 19:17:43 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:06.441 19:17:43 -- common/autotest_common.sh@1318 -- # shift 00:17:06.441 19:17:43 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:17:06.441 19:17:43 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:17:06.441 19:17:43 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:06.441 19:17:43 -- common/autotest_common.sh@1322 -- # grep libasan 00:17:06.441 19:17:43 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:17:06.441 19:17:43 -- common/autotest_common.sh@1322 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:06.441 19:17:43 -- common/autotest_common.sh@1323 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:06.441 19:17:43 -- common/autotest_common.sh@1324 -- # break 00:17:06.441 19:17:43 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:06.441 19:17:43 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:06.441 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:06.441 fio-3.35 00:17:06.441 Starting 1 thread 00:17:24.528 00:17:24.528 test: (groupid=0, jobs=1): err= 0: pid=72818: Wed Feb 14 19:17:59 2024 00:17:24.529 read: IOPS=6631, BW=25.9MiB/s (27.2MB/s)(255MiB/9832msec) 00:17:24.529 slat (nsec): min=4435, max=38304, avg=6675.78, stdev=2222.79 00:17:24.529 clat (usec): min=771, max=36456, avg=19291.18, stdev=1020.14 00:17:24.529 lat (usec): min=776, max=36464, avg=19297.85, stdev=1020.15 00:17:24.529 clat percentiles (usec): 00:17:24.529 | 1.00th=[17957], 5.00th=[18482], 10.00th=[18482], 20.00th=[18744], 00:17:24.529 | 30.00th=[19006], 40.00th=[19006], 50.00th=[19268], 60.00th=[19268], 00:17:24.529 | 70.00th=[19530], 80.00th=[19530], 90.00th=[19792], 95.00th=[20055], 00:17:24.529 | 99.00th=[24249], 99.50th=[24511], 99.90th=[27395], 99.95th=[31851], 00:17:24.529 | 99.99th=[35914] 00:17:24.529 write: IOPS=12.0k, BW=46.8MiB/s 
(49.1MB/s)(256MiB/5469msec); 0 zone resets 00:17:24.529 slat (usec): min=5, max=543, avg= 9.25, stdev= 5.56 00:17:24.529 clat (usec): min=679, max=66576, avg=10620.17, stdev=13547.61 00:17:24.529 lat (usec): min=688, max=66584, avg=10629.42, stdev=13547.67 00:17:24.529 clat percentiles (usec): 00:17:24.529 | 1.00th=[ 963], 5.00th=[ 1156], 10.00th=[ 1270], 20.00th=[ 1467], 00:17:24.529 | 30.00th=[ 1663], 40.00th=[ 2147], 50.00th=[ 6718], 60.00th=[ 7701], 00:17:24.529 | 70.00th=[ 8979], 80.00th=[10814], 90.00th=[39060], 95.00th=[42206], 00:17:24.529 | 99.00th=[45876], 99.50th=[46924], 99.90th=[49546], 99.95th=[54789], 00:17:24.529 | 99.99th=[61604] 00:17:24.529 bw ( KiB/s): min=37528, max=69064, per=99.44%, avg=47662.55, stdev=10365.74, samples=11 00:17:24.529 iops : min= 9382, max=17266, avg=11915.64, stdev=2591.44, samples=11 00:17:24.529 lat (usec) : 750=0.01%, 1000=0.75% 00:17:24.529 lat (msec) : 2=18.62%, 4=1.55%, 10=16.94%, 20=50.73%, 50=11.36% 00:17:24.529 lat (msec) : 100=0.04% 00:17:24.529 cpu : usr=98.82%, sys=0.57%, ctx=25, majf=0, minf=5567 00:17:24.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:24.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.529 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:24.529 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:24.529 00:17:24.529 Run status group 0 (all jobs): 00:17:24.529 READ: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=255MiB (267MB), run=9832-9832msec 00:17:24.529 WRITE: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=256MiB (268MB), run=5469-5469msec 00:17:24.529 ----------------------------------------------------- 00:17:24.529 Suppressions used: 00:17:24.529 count bytes template 00:17:24.529 1 5 /usr/src/fio/parse.c 00:17:24.529 2 192 /usr/src/fio/iolog.c 00:17:24.529 1 8 libtcmalloc_minimal.so 00:17:24.529 1 904 libcrypto.so 00:17:24.529 ----------------------------------------------------- 00:17:24.529 00:17:24.529 19:18:01 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:24.529 19:18:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:24.529 19:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:24.529 19:18:01 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:24.529 19:18:01 -- ftl/fio.sh@85 -- # remove_shm 00:17:24.529 Remove shared memory files 00:17:24.529 19:18:01 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:24.529 19:18:01 -- ftl/common.sh@205 -- # rm -f rm -f 00:17:24.529 19:18:01 -- ftl/common.sh@206 -- # rm -f rm -f 00:17:24.529 19:18:01 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56936 /dev/shm/spdk_tgt_trace.pid71038 00:17:24.529 19:18:01 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:24.529 19:18:01 -- ftl/common.sh@209 -- # rm -f rm -f 00:17:24.529 00:17:24.529 real 1m12.224s 00:17:24.529 user 2m42.726s 00:17:24.529 sys 0m3.739s 00:17:24.529 19:18:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:24.529 19:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:24.529 ************************************ 00:17:24.529 END TEST ftl_fio_basic 00:17:24.529 ************************************ 00:17:24.529 19:18:01 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:17:24.529 
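
Before the bdevperf stage begins below, note how the three verify jobs above (randw-verify, randw-verify-j2, randw-verify-depth128) were driven: the fio_bdev/fio_plugin helper points a stock fio binary at SPDK's bdev layer by preloading the spdk_bdev fio plugin, with the sanitizer runtime resolved via ldd and preloaded alongside it, exactly as the xtrace lines show. A condensed sketch of that invocation, using the paths printed in the log:

    # Run an fio job file against SPDK bdevs through the spdk_bdev fio plugin.
    # Paths and the preloaded ASAN runtime are the ones visible in the xtrace above;
    # the job files themselves select ioengine=spdk_bdev, as the fio banners confirm.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
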
19:18:01 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:17:24.529 19:18:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:24.529 19:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:24.529 ************************************ 00:17:24.529 START TEST ftl_bdevperf 00:17:24.529 ************************************ 00:17:24.529 19:18:01 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:17:24.529 * Looking for test storage... 00:17:24.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:24.529 19:18:01 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:24.529 19:18:01 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.529 19:18:01 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.529 19:18:01 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:24.529 19:18:01 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:24.529 19:18:01 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.529 19:18:01 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:24.529 19:18:01 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:24.529 19:18:01 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.529 19:18:01 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.529 19:18:01 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:24.529 19:18:01 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:24.529 19:18:01 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:24.529 19:18:01 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:24.529 19:18:01 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:24.529 19:18:01 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:24.529 19:18:01 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.529 19:18:01 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.529 19:18:01 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:24.529 19:18:01 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:24.529 19:18:01 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:24.529 19:18:01 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:24.529 19:18:01 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:24.529 19:18:01 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:24.529 19:18:01 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:24.529 19:18:01 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:24.529 19:18:01 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:24.529 19:18:01 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@13 -- # use_append= 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:24.529 19:18:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:24.529 19:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=73061 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@22 -- # waitforlisten 73061 00:17:24.529 19:18:01 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:24.529 19:18:01 -- common/autotest_common.sh@817 -- # '[' -z 73061 ']' 00:17:24.529 19:18:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.529 19:18:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:24.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.529 19:18:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.529 19:18:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:24.529 19:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:24.529 [2024-02-14 19:18:01.693323] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:24.529 [2024-02-14 19:18:01.693508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73061 ] 00:17:24.529 [2024-02-14 19:18:01.861937] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.789 [2024-02-14 19:18:02.038351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.357 19:18:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:25.357 19:18:02 -- common/autotest_common.sh@850 -- # return 0 00:17:25.357 19:18:02 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:25.357 19:18:02 -- ftl/common.sh@54 -- # local name=nvme0 00:17:25.357 19:18:02 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:25.357 19:18:02 -- ftl/common.sh@56 -- # local size=103424 00:17:25.357 19:18:02 -- ftl/common.sh@59 -- # local base_bdev 00:17:25.357 19:18:02 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:25.646 19:18:02 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:25.646 19:18:02 -- ftl/common.sh@62 -- # local base_size 00:17:25.646 19:18:02 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:25.646 19:18:02 -- common/autotest_common.sh@1355 -- # local bdev_name=nvme0n1 00:17:25.646 19:18:02 -- common/autotest_common.sh@1356 -- # local bdev_info 00:17:25.646 19:18:02 -- common/autotest_common.sh@1357 -- # local bs 00:17:25.646 19:18:02 -- common/autotest_common.sh@1358 -- # local nb 00:17:25.646 19:18:02 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:25.905 19:18:03 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:17:25.905 { 00:17:25.905 "name": "nvme0n1", 00:17:25.905 "aliases": [ 00:17:25.905 "ad97797f-142d-4e74-9783-00aa69ecd117" 00:17:25.905 ], 00:17:25.905 "product_name": "NVMe disk", 
00:17:25.905 "block_size": 4096, 00:17:25.905 "num_blocks": 1310720, 00:17:25.905 "uuid": "ad97797f-142d-4e74-9783-00aa69ecd117", 00:17:25.905 "assigned_rate_limits": { 00:17:25.905 "rw_ios_per_sec": 0, 00:17:25.905 "rw_mbytes_per_sec": 0, 00:17:25.905 "r_mbytes_per_sec": 0, 00:17:25.905 "w_mbytes_per_sec": 0 00:17:25.905 }, 00:17:25.905 "claimed": true, 00:17:25.905 "claim_type": "read_many_write_one", 00:17:25.905 "zoned": false, 00:17:25.905 "supported_io_types": { 00:17:25.905 "read": true, 00:17:25.905 "write": true, 00:17:25.905 "unmap": true, 00:17:25.905 "write_zeroes": true, 00:17:25.905 "flush": true, 00:17:25.905 "reset": true, 00:17:25.905 "compare": true, 00:17:25.905 "compare_and_write": false, 00:17:25.905 "abort": true, 00:17:25.905 "nvme_admin": true, 00:17:25.905 "nvme_io": true 00:17:25.905 }, 00:17:25.905 "driver_specific": { 00:17:25.905 "nvme": [ 00:17:25.905 { 00:17:25.905 "pci_address": "0000:00:07.0", 00:17:25.905 "trid": { 00:17:25.905 "trtype": "PCIe", 00:17:25.905 "traddr": "0000:00:07.0" 00:17:25.905 }, 00:17:25.905 "ctrlr_data": { 00:17:25.905 "cntlid": 0, 00:17:25.905 "vendor_id": "0x1b36", 00:17:25.905 "model_number": "QEMU NVMe Ctrl", 00:17:25.905 "serial_number": "12341", 00:17:25.905 "firmware_revision": "8.0.0", 00:17:25.905 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:25.905 "oacs": { 00:17:25.905 "security": 0, 00:17:25.905 "format": 1, 00:17:25.905 "firmware": 0, 00:17:25.905 "ns_manage": 1 00:17:25.905 }, 00:17:25.905 "multi_ctrlr": false, 00:17:25.905 "ana_reporting": false 00:17:25.905 }, 00:17:25.905 "vs": { 00:17:25.905 "nvme_version": "1.4" 00:17:25.905 }, 00:17:25.905 "ns_data": { 00:17:25.905 "id": 1, 00:17:25.905 "can_share": false 00:17:25.905 } 00:17:25.905 } 00:17:25.905 ], 00:17:25.905 "mp_policy": "active_passive" 00:17:25.905 } 00:17:25.905 } 00:17:25.905 ]' 00:17:25.905 19:18:03 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:17:25.905 19:18:03 -- common/autotest_common.sh@1360 -- # bs=4096 00:17:25.905 19:18:03 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:17:26.164 19:18:03 -- common/autotest_common.sh@1361 -- # nb=1310720 00:17:26.164 19:18:03 -- common/autotest_common.sh@1364 -- # bdev_size=5120 00:17:26.164 19:18:03 -- common/autotest_common.sh@1365 -- # echo 5120 00:17:26.164 19:18:03 -- ftl/common.sh@63 -- # base_size=5120 00:17:26.164 19:18:03 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:26.164 19:18:03 -- ftl/common.sh@67 -- # clear_lvols 00:17:26.164 19:18:03 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:26.164 19:18:03 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:26.422 19:18:03 -- ftl/common.sh@28 -- # stores=113acd72-8889-465b-a948-705fe3b05d49 00:17:26.422 19:18:03 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:26.422 19:18:03 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 113acd72-8889-465b-a948-705fe3b05d49 00:17:26.681 19:18:03 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:26.681 19:18:04 -- ftl/common.sh@68 -- # lvs=af0d13ae-fd14-47a5-a611-85064069682f 00:17:26.681 19:18:04 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u af0d13ae-fd14-47a5-a611-85064069682f 00:17:26.941 19:18:04 -- ftl/bdevperf.sh@23 -- # split_bdev=dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:27.200 19:18:04 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 
dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:27.200 19:18:04 -- ftl/common.sh@35 -- # local name=nvc0 00:17:27.200 19:18:04 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:27.200 19:18:04 -- ftl/common.sh@37 -- # local base_bdev=dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:27.200 19:18:04 -- ftl/common.sh@38 -- # local cache_size= 00:17:27.200 19:18:04 -- ftl/common.sh@41 -- # get_bdev_size dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:27.200 19:18:04 -- common/autotest_common.sh@1355 -- # local bdev_name=dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:27.200 19:18:04 -- common/autotest_common.sh@1356 -- # local bdev_info 00:17:27.200 19:18:04 -- common/autotest_common.sh@1357 -- # local bs 00:17:27.200 19:18:04 -- common/autotest_common.sh@1358 -- # local nb 00:17:27.200 19:18:04 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:27.200 19:18:04 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:17:27.200 { 00:17:27.200 "name": "dfa93e92-b800-42a2-9a25-02fc54ecd9d6", 00:17:27.200 "aliases": [ 00:17:27.200 "lvs/nvme0n1p0" 00:17:27.200 ], 00:17:27.200 "product_name": "Logical Volume", 00:17:27.200 "block_size": 4096, 00:17:27.200 "num_blocks": 26476544, 00:17:27.200 "uuid": "dfa93e92-b800-42a2-9a25-02fc54ecd9d6", 00:17:27.200 "assigned_rate_limits": { 00:17:27.200 "rw_ios_per_sec": 0, 00:17:27.200 "rw_mbytes_per_sec": 0, 00:17:27.200 "r_mbytes_per_sec": 0, 00:17:27.200 "w_mbytes_per_sec": 0 00:17:27.200 }, 00:17:27.200 "claimed": false, 00:17:27.200 "zoned": false, 00:17:27.200 "supported_io_types": { 00:17:27.200 "read": true, 00:17:27.200 "write": true, 00:17:27.200 "unmap": true, 00:17:27.200 "write_zeroes": true, 00:17:27.200 "flush": false, 00:17:27.200 "reset": true, 00:17:27.200 "compare": false, 00:17:27.200 "compare_and_write": false, 00:17:27.200 "abort": false, 00:17:27.200 "nvme_admin": false, 00:17:27.200 "nvme_io": false 00:17:27.200 }, 00:17:27.200 "driver_specific": { 00:17:27.200 "lvol": { 00:17:27.200 "lvol_store_uuid": "af0d13ae-fd14-47a5-a611-85064069682f", 00:17:27.200 "base_bdev": "nvme0n1", 00:17:27.200 "thin_provision": true, 00:17:27.200 "snapshot": false, 00:17:27.200 "clone": false, 00:17:27.200 "esnap_clone": false 00:17:27.200 } 00:17:27.200 } 00:17:27.200 } 00:17:27.200 ]' 00:17:27.200 19:18:04 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:17:27.200 19:18:04 -- common/autotest_common.sh@1360 -- # bs=4096 00:17:27.459 19:18:04 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:17:27.459 19:18:04 -- common/autotest_common.sh@1361 -- # nb=26476544 00:17:27.459 19:18:04 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:17:27.459 19:18:04 -- common/autotest_common.sh@1365 -- # echo 103424 00:17:27.459 19:18:04 -- ftl/common.sh@41 -- # local base_size=5171 00:17:27.459 19:18:04 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:27.459 19:18:04 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:27.718 19:18:04 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:27.718 19:18:04 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:27.718 19:18:04 -- ftl/common.sh@48 -- # get_bdev_size dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:27.718 19:18:04 -- common/autotest_common.sh@1355 -- # local bdev_name=dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:27.718 19:18:04 -- common/autotest_common.sh@1356 -- # local bdev_info 00:17:27.718 19:18:04 -- common/autotest_common.sh@1357 -- # local 
bs 00:17:27.718 19:18:04 -- common/autotest_common.sh@1358 -- # local nb 00:17:27.718 19:18:04 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:27.977 19:18:05 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:17:27.977 { 00:17:27.977 "name": "dfa93e92-b800-42a2-9a25-02fc54ecd9d6", 00:17:27.977 "aliases": [ 00:17:27.977 "lvs/nvme0n1p0" 00:17:27.977 ], 00:17:27.977 "product_name": "Logical Volume", 00:17:27.977 "block_size": 4096, 00:17:27.977 "num_blocks": 26476544, 00:17:27.977 "uuid": "dfa93e92-b800-42a2-9a25-02fc54ecd9d6", 00:17:27.977 "assigned_rate_limits": { 00:17:27.977 "rw_ios_per_sec": 0, 00:17:27.977 "rw_mbytes_per_sec": 0, 00:17:27.977 "r_mbytes_per_sec": 0, 00:17:27.977 "w_mbytes_per_sec": 0 00:17:27.977 }, 00:17:27.977 "claimed": false, 00:17:27.977 "zoned": false, 00:17:27.977 "supported_io_types": { 00:17:27.977 "read": true, 00:17:27.977 "write": true, 00:17:27.977 "unmap": true, 00:17:27.977 "write_zeroes": true, 00:17:27.977 "flush": false, 00:17:27.977 "reset": true, 00:17:27.977 "compare": false, 00:17:27.977 "compare_and_write": false, 00:17:27.977 "abort": false, 00:17:27.977 "nvme_admin": false, 00:17:27.977 "nvme_io": false 00:17:27.977 }, 00:17:27.977 "driver_specific": { 00:17:27.977 "lvol": { 00:17:27.977 "lvol_store_uuid": "af0d13ae-fd14-47a5-a611-85064069682f", 00:17:27.977 "base_bdev": "nvme0n1", 00:17:27.977 "thin_provision": true, 00:17:27.977 "snapshot": false, 00:17:27.977 "clone": false, 00:17:27.977 "esnap_clone": false 00:17:27.977 } 00:17:27.977 } 00:17:27.977 } 00:17:27.977 ]' 00:17:27.977 19:18:05 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:17:27.977 19:18:05 -- common/autotest_common.sh@1360 -- # bs=4096 00:17:27.977 19:18:05 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:17:27.977 19:18:05 -- common/autotest_common.sh@1361 -- # nb=26476544 00:17:27.977 19:18:05 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:17:27.977 19:18:05 -- common/autotest_common.sh@1365 -- # echo 103424 00:17:27.977 19:18:05 -- ftl/common.sh@48 -- # cache_size=5171 00:17:27.977 19:18:05 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:28.236 19:18:05 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:17:28.236 19:18:05 -- ftl/bdevperf.sh@26 -- # get_bdev_size dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:28.236 19:18:05 -- common/autotest_common.sh@1355 -- # local bdev_name=dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:28.236 19:18:05 -- common/autotest_common.sh@1356 -- # local bdev_info 00:17:28.236 19:18:05 -- common/autotest_common.sh@1357 -- # local bs 00:17:28.236 19:18:05 -- common/autotest_common.sh@1358 -- # local nb 00:17:28.236 19:18:05 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfa93e92-b800-42a2-9a25-02fc54ecd9d6 00:17:28.495 19:18:05 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:17:28.495 { 00:17:28.495 "name": "dfa93e92-b800-42a2-9a25-02fc54ecd9d6", 00:17:28.495 "aliases": [ 00:17:28.495 "lvs/nvme0n1p0" 00:17:28.495 ], 00:17:28.495 "product_name": "Logical Volume", 00:17:28.495 "block_size": 4096, 00:17:28.495 "num_blocks": 26476544, 00:17:28.495 "uuid": "dfa93e92-b800-42a2-9a25-02fc54ecd9d6", 00:17:28.495 "assigned_rate_limits": { 00:17:28.495 "rw_ios_per_sec": 0, 00:17:28.495 "rw_mbytes_per_sec": 0, 00:17:28.495 "r_mbytes_per_sec": 0, 00:17:28.495 "w_mbytes_per_sec": 0 00:17:28.495 }, 00:17:28.495 
"claimed": false, 00:17:28.495 "zoned": false, 00:17:28.495 "supported_io_types": { 00:17:28.495 "read": true, 00:17:28.495 "write": true, 00:17:28.495 "unmap": true, 00:17:28.495 "write_zeroes": true, 00:17:28.495 "flush": false, 00:17:28.495 "reset": true, 00:17:28.495 "compare": false, 00:17:28.495 "compare_and_write": false, 00:17:28.495 "abort": false, 00:17:28.495 "nvme_admin": false, 00:17:28.495 "nvme_io": false 00:17:28.495 }, 00:17:28.495 "driver_specific": { 00:17:28.495 "lvol": { 00:17:28.495 "lvol_store_uuid": "af0d13ae-fd14-47a5-a611-85064069682f", 00:17:28.495 "base_bdev": "nvme0n1", 00:17:28.495 "thin_provision": true, 00:17:28.495 "snapshot": false, 00:17:28.495 "clone": false, 00:17:28.495 "esnap_clone": false 00:17:28.495 } 00:17:28.495 } 00:17:28.495 } 00:17:28.495 ]' 00:17:28.495 19:18:05 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:17:28.495 19:18:05 -- common/autotest_common.sh@1360 -- # bs=4096 00:17:28.495 19:18:05 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:17:28.495 19:18:05 -- common/autotest_common.sh@1361 -- # nb=26476544 00:17:28.495 19:18:05 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:17:28.495 19:18:05 -- common/autotest_common.sh@1365 -- # echo 103424 00:17:28.495 19:18:05 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:17:28.495 19:18:05 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dfa93e92-b800-42a2-9a25-02fc54ecd9d6 -c nvc0n1p0 --l2p_dram_limit 20 00:17:28.755 [2024-02-14 19:18:06.055830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.755 [2024-02-14 19:18:06.055919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:28.755 [2024-02-14 19:18:06.055974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.755 [2024-02-14 19:18:06.055987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.755 [2024-02-14 19:18:06.056064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.755 [2024-02-14 19:18:06.056082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.755 [2024-02-14 19:18:06.056098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:28.755 [2024-02-14 19:18:06.056109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.755 [2024-02-14 19:18:06.056139] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:28.755 [2024-02-14 19:18:06.057170] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:28.755 [2024-02-14 19:18:06.057230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.755 [2024-02-14 19:18:06.057259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.755 [2024-02-14 19:18:06.057274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:17:28.755 [2024-02-14 19:18:06.057285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.755 [2024-02-14 19:18:06.057418] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 55a49134-3cfd-4fab-9ef4-a56906152faa 00:17:28.755 [2024-02-14 19:18:06.058477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.755 [2024-02-14 19:18:06.058566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Default-initialize superblock 00:17:28.755 [2024-02-14 19:18:06.058582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:28.756 [2024-02-14 19:18:06.058596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.756 [2024-02-14 19:18:06.063231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.756 [2024-02-14 19:18:06.063295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.756 [2024-02-14 19:18:06.063331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.588 ms 00:17:28.756 [2024-02-14 19:18:06.063345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.756 [2024-02-14 19:18:06.063476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.756 [2024-02-14 19:18:06.063499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.756 [2024-02-14 19:18:06.063512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:28.756 [2024-02-14 19:18:06.063576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.756 [2024-02-14 19:18:06.063670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.756 [2024-02-14 19:18:06.063691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:28.756 [2024-02-14 19:18:06.063705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:28.756 [2024-02-14 19:18:06.063721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.756 [2024-02-14 19:18:06.063752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:28.756 [2024-02-14 19:18:06.068476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.756 [2024-02-14 19:18:06.068557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.756 [2024-02-14 19:18:06.068577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.730 ms 00:17:28.756 [2024-02-14 19:18:06.068589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.756 [2024-02-14 19:18:06.068634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.756 [2024-02-14 19:18:06.068650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:28.756 [2024-02-14 19:18:06.068665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:28.756 [2024-02-14 19:18:06.068677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.756 [2024-02-14 19:18:06.068730] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:28.756 [2024-02-14 19:18:06.068871] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:28.756 [2024-02-14 19:18:06.068898] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:28.756 [2024-02-14 19:18:06.068913] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:28.756 [2024-02-14 19:18:06.068930] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:28.756 [2024-02-14 19:18:06.068944] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:28.756 [2024-02-14 
19:18:06.068959] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:28.756 [2024-02-14 19:18:06.068970] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:28.756 [2024-02-14 19:18:06.068983] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:28.756 [2024-02-14 19:18:06.068999] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:28.756 [2024-02-14 19:18:06.069013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.756 [2024-02-14 19:18:06.069025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:28.756 [2024-02-14 19:18:06.069039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:17:28.756 [2024-02-14 19:18:06.069050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.756 [2024-02-14 19:18:06.069123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.756 [2024-02-14 19:18:06.069138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:28.756 [2024-02-14 19:18:06.069152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:28.756 [2024-02-14 19:18:06.069164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.756 [2024-02-14 19:18:06.069254] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:28.756 [2024-02-14 19:18:06.069270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:28.756 [2024-02-14 19:18:06.069285] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.756 [2024-02-14 19:18:06.069297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069311] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:28.756 [2024-02-14 19:18:06.069321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:28.756 [2024-02-14 19:18:06.069345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:28.756 [2024-02-14 19:18:06.069370] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.756 [2024-02-14 19:18:06.069393] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:28.756 [2024-02-14 19:18:06.069404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:28.756 [2024-02-14 19:18:06.069418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.756 [2024-02-14 19:18:06.069429] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:28.756 [2024-02-14 19:18:06.069442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:28.756 [2024-02-14 19:18:06.069452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:28.756 [2024-02-14 19:18:06.069477] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:28.756 [2024-02-14 19:18:06.069516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069530] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:28.756 [2024-02-14 19:18:06.069543] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:28.756 [2024-02-14 19:18:06.069554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:28.756 [2024-02-14 19:18:06.069566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:28.756 [2024-02-14 19:18:06.069578] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.756 [2024-02-14 19:18:06.069615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:28.756 [2024-02-14 19:18:06.069642] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.756 [2024-02-14 19:18:06.069671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:28.756 [2024-02-14 19:18:06.069683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.756 [2024-02-14 19:18:06.069711] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:28.756 [2024-02-14 19:18:06.069726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.756 [2024-02-14 19:18:06.069749] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:28.756 [2024-02-14 19:18:06.069759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.756 [2024-02-14 19:18:06.069782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:28.756 [2024-02-14 19:18:06.069797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:28.756 [2024-02-14 19:18:06.069808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.756 [2024-02-14 19:18:06.069820] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:28.756 [2024-02-14 19:18:06.069832] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:28.756 [2024-02-14 19:18:06.069845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.756 [2024-02-14 19:18:06.069857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.756 [2024-02-14 19:18:06.069871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:28.756 [2024-02-14 19:18:06.069881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:28.756 [2024-02-14 19:18:06.069894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:28.756 [2024-02-14 19:18:06.069904] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:28.756 [2024-02-14 19:18:06.069919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:28.756 [2024-02-14 19:18:06.069930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:28.756 [2024-02-14 19:18:06.069948] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata 
layout - nvc: 00:17:28.756 [2024-02-14 19:18:06.069963] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.756 [2024-02-14 19:18:06.069981] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:28.756 [2024-02-14 19:18:06.069993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:28.757 [2024-02-14 19:18:06.070007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:28.757 [2024-02-14 19:18:06.070019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:28.757 [2024-02-14 19:18:06.070032] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:28.757 [2024-02-14 19:18:06.070044] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:28.757 [2024-02-14 19:18:06.070058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:28.757 [2024-02-14 19:18:06.070069] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:28.757 [2024-02-14 19:18:06.070086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:28.757 [2024-02-14 19:18:06.070098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:28.757 [2024-02-14 19:18:06.070113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:28.757 [2024-02-14 19:18:06.070125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:28.757 [2024-02-14 19:18:06.070140] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:28.757 [2024-02-14 19:18:06.070152] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:28.757 [2024-02-14 19:18:06.070166] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.757 [2024-02-14 19:18:06.070179] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:28.757 [2024-02-14 19:18:06.070192] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:28.757 [2024-02-14 19:18:06.070204] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:28.757 [2024-02-14 19:18:06.070218] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:28.757 [2024-02-14 19:18:06.070231] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.757 [2024-02-14 19:18:06.070245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:28.757 [2024-02-14 19:18:06.070257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:17:28.757 [2024-02-14 19:18:06.070271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.757 [2024-02-14 19:18:06.087795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.757 [2024-02-14 19:18:06.087856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.757 [2024-02-14 19:18:06.087888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.476 ms 00:17:28.757 [2024-02-14 19:18:06.087916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.757 [2024-02-14 19:18:06.088031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.757 [2024-02-14 19:18:06.088052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:28.757 [2024-02-14 19:18:06.088065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:28.757 [2024-02-14 19:18:06.088078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.757 [2024-02-14 19:18:06.134567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.757 [2024-02-14 19:18:06.134641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.757 [2024-02-14 19:18:06.134674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.406 ms 00:17:28.757 [2024-02-14 19:18:06.134687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.757 [2024-02-14 19:18:06.134732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.757 [2024-02-14 19:18:06.134750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.757 [2024-02-14 19:18:06.134765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.757 [2024-02-14 19:18:06.134777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.757 [2024-02-14 19:18:06.135210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.757 [2024-02-14 19:18:06.135243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.757 [2024-02-14 19:18:06.135258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:17:28.757 [2024-02-14 19:18:06.135271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.757 [2024-02-14 19:18:06.135406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.757 [2024-02-14 19:18:06.135430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.757 [2024-02-14 19:18:06.135443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:17:28.757 [2024-02-14 19:18:06.135459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.757 [2024-02-14 19:18:06.150873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.757 [2024-02-14 19:18:06.150943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.757 [2024-02-14 19:18:06.150976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.392 ms 00:17:28.757 [2024-02-14 19:18:06.150989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
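
The startup trace running through this part of the log is the target bringing up ftl0 in response to the bdev_ftl_create call issued earlier. Condensing the RPC calls out of the xtrace above, the device is assembled as follows: attach the base NVMe controller, build an lvstore and a thin-provisioned lvol on it, attach the cache NVMe controller, split off a write-buffer partition, and bind the two into an FTL bdev. A sketch of that sequence with the sizes and names shown in the log (UUID placeholders stand in for the store and lvol UUIDs printed above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base (data) device on 0000:00:07.0 and cache device on 0000:00:06.0, as attached above.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>   # thin-provisioned, 103424 MiB
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
    $rpc bdev_split_create nvc0n1 -s 5171 1                       # one 5171 MiB cache partition
    # Bind the data lvol and the cache partition into ftl0 with a 20 MiB L2P DRAM budget.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

The 103424 MiB figure matches the lvol's reported geometry (26476544 blocks x 4096 B), and 5171 MiB is the cache_size the script computed for the split. Once 'FTL startup' completes, ftl0 is handed to the bdevperf app started with '-z -T ftl0', and the workloads are kicked off over RPC via bdevperf.py perform_tests, as the tail of this log shows.
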
00:17:28.757 [2024-02-14 19:18:06.162893] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:28.757 [2024-02-14 19:18:06.167865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.757 [2024-02-14 19:18:06.167899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.757 [2024-02-14 19:18:06.167933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.781 ms 00:17:28.757 [2024-02-14 19:18:06.167946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.017 [2024-02-14 19:18:06.232137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.017 [2024-02-14 19:18:06.232216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:29.017 [2024-02-14 19:18:06.232253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.151 ms 00:17:29.017 [2024-02-14 19:18:06.232265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.017 [2024-02-14 19:18:06.232319] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:29.017 [2024-02-14 19:18:06.232338] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:31.550 [2024-02-14 19:18:08.586793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.550 [2024-02-14 19:18:08.586874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:31.550 [2024-02-14 19:18:08.586911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2354.485 ms 00:17:31.550 [2024-02-14 19:18:08.586923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.550 [2024-02-14 19:18:08.587142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.550 [2024-02-14 19:18:08.587176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:31.550 [2024-02-14 19:18:08.587192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:31.550 [2024-02-14 19:18:08.587203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.550 [2024-02-14 19:18:08.614959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.550 [2024-02-14 19:18:08.615011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:31.550 [2024-02-14 19:18:08.615044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.688 ms 00:17:31.550 [2024-02-14 19:18:08.615058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.550 [2024-02-14 19:18:08.641912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.550 [2024-02-14 19:18:08.641968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:31.550 [2024-02-14 19:18:08.642020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.808 ms 00:17:31.550 [2024-02-14 19:18:08.642032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.550 [2024-02-14 19:18:08.642458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.550 [2024-02-14 19:18:08.642512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.550 [2024-02-14 19:18:08.642534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:17:31.550 
[2024-02-14 19:18:08.642546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.550 [2024-02-14 19:18:08.714750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.550 [2024-02-14 19:18:08.714809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:31.550 [2024-02-14 19:18:08.714829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.144 ms 00:17:31.550 [2024-02-14 19:18:08.714841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.550 [2024-02-14 19:18:08.744208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.550 [2024-02-14 19:18:08.744259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:31.550 [2024-02-14 19:18:08.744278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.261 ms 00:17:31.550 [2024-02-14 19:18:08.744289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.550 [2024-02-14 19:18:08.746338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.551 [2024-02-14 19:18:08.746387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:31.551 [2024-02-14 19:18:08.746405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:17:31.551 [2024-02-14 19:18:08.746416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.551 [2024-02-14 19:18:08.775303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.551 [2024-02-14 19:18:08.775354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.551 [2024-02-14 19:18:08.775373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.824 ms 00:17:31.551 [2024-02-14 19:18:08.775385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.551 [2024-02-14 19:18:08.775436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.551 [2024-02-14 19:18:08.775452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.551 [2024-02-14 19:18:08.775469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:31.551 [2024-02-14 19:18:08.775495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.551 [2024-02-14 19:18:08.775655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.551 [2024-02-14 19:18:08.775674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.551 [2024-02-14 19:18:08.775689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:31.551 [2024-02-14 19:18:08.775700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.551 [2024-02-14 19:18:08.776898] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2720.521 ms, result 0 00:17:31.551 { 00:17:31.551 "name": "ftl0", 00:17:31.551 "uuid": "55a49134-3cfd-4fab-9ef4-a56906152faa" 00:17:31.551 } 00:17:31.551 19:18:08 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:31.551 19:18:08 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:17:31.551 19:18:08 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:17:31.810 19:18:09 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:32.069 [2024-02-14 19:18:09.241039] 
mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:32.069 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:32.069 Zero copy mechanism will not be used. 00:17:32.069 Running I/O for 4 seconds... 00:17:36.259 00:17:36.259 Latency(us) 00:17:36.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.259 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:36.260 ftl0 : 4.00 1742.03 115.68 0.00 0.00 601.75 245.76 6881.28 00:17:36.260 =================================================================================================================== 00:17:36.260 Total : 1742.03 115.68 0.00 0.00 601.75 245.76 6881.28 00:17:36.260 [2024-02-14 19:18:13.250381] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:36.260 0 00:17:36.260 19:18:13 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:36.260 [2024-02-14 19:18:13.376119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:36.260 Running I/O for 4 seconds... 00:17:40.445 00:17:40.445 Latency(us) 00:17:40.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.445 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.445 ftl0 : 4.02 7839.92 30.62 0.00 0.00 16285.84 336.99 31218.97 00:17:40.445 =================================================================================================================== 00:17:40.445 Total : 7839.92 30.62 0.00 0.00 16285.84 0.00 31218.97 00:17:40.445 [2024-02-14 19:18:17.402716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:40.445 0 00:17:40.445 19:18:17 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:40.445 [2024-02-14 19:18:17.539594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:40.445 Running I/O for 4 seconds... 
00:17:44.669 00:17:44.669 Latency(us) 00:17:44.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.669 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.669 Verification LBA range: start 0x0 length 0x1400000 00:17:44.669 ftl0 : 4.01 8646.44 33.78 0.00 0.00 14762.97 262.52 18230.92 00:17:44.669 =================================================================================================================== 00:17:44.669 Total : 8646.44 33.78 0.00 0.00 14762.97 0.00 18230.92 00:17:44.669 [2024-02-14 19:18:21.564461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:44.669 0 00:17:44.669 19:18:21 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:44.669 [2024-02-14 19:18:21.824469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.669 [2024-02-14 19:18:21.824580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:44.669 [2024-02-14 19:18:21.824606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:44.669 [2024-02-14 19:18:21.824619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.669 [2024-02-14 19:18:21.824654] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.669 [2024-02-14 19:18:21.827679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.669 [2024-02-14 19:18:21.827717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:44.669 [2024-02-14 19:18:21.827748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.989 ms 00:17:44.669 [2024-02-14 19:18:21.827763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.669 [2024-02-14 19:18:21.829602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.669 [2024-02-14 19:18:21.829685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:44.669 [2024-02-14 19:18:21.829703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.797 ms 00:17:44.669 [2024-02-14 19:18:21.829717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.669 [2024-02-14 19:18:22.000251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.669 [2024-02-14 19:18:22.000319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:44.669 [2024-02-14 19:18:22.000341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 170.509 ms 00:17:44.669 [2024-02-14 19:18:22.000359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.669 [2024-02-14 19:18:22.006822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.669 [2024-02-14 19:18:22.006859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:44.669 [2024-02-14 19:18:22.006906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.419 ms 00:17:44.669 [2024-02-14 19:18:22.006920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.669 [2024-02-14 19:18:22.035569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.669 [2024-02-14 19:18:22.035628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:44.669 [2024-02-14 19:18:22.035646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
28.569 ms 00:17:44.669 [2024-02-14 19:18:22.035661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.669 [2024-02-14 19:18:22.052744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.669 [2024-02-14 19:18:22.052805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:44.669 [2024-02-14 19:18:22.052822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.042 ms 00:17:44.669 [2024-02-14 19:18:22.052835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.669 [2024-02-14 19:18:22.052988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.669 [2024-02-14 19:18:22.053015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:44.669 [2024-02-14 19:18:22.053028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:17:44.669 [2024-02-14 19:18:22.053040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.669 [2024-02-14 19:18:22.081948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.669 [2024-02-14 19:18:22.082008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:44.669 [2024-02-14 19:18:22.082026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.888 ms 00:17:44.669 [2024-02-14 19:18:22.082039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.929 [2024-02-14 19:18:22.111100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.929 [2024-02-14 19:18:22.111143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:44.929 [2024-02-14 19:18:22.111175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.019 ms 00:17:44.929 [2024-02-14 19:18:22.111189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.929 [2024-02-14 19:18:22.139509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.929 [2024-02-14 19:18:22.139574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:44.929 [2024-02-14 19:18:22.139592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.263 ms 00:17:44.929 [2024-02-14 19:18:22.139604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.929 [2024-02-14 19:18:22.167350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.929 [2024-02-14 19:18:22.167408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:44.929 [2024-02-14 19:18:22.167424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.647 ms 00:17:44.929 [2024-02-14 19:18:22.167436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.929 [2024-02-14 19:18:22.167476] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:44.929 [2024-02-14 19:18:22.167532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 
19:18:22.167585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:44.929 [2024-02-14 19:18:22.167899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:44.929 [2024-02-14 19:18:22.167911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.167922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.167934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.167946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.167958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.167969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.167981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.167991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:44.930 [2024-02-14 19:18:22.168821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:44.930 [2024-02-14 19:18:22.168832] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 55a49134-3cfd-4fab-9ef4-a56906152faa 00:17:44.930 [2024-02-14 19:18:22.168848] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:44.930 [2024-02-14 19:18:22.168859] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:44.930 
[2024-02-14 19:18:22.168871] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:44.930 [2024-02-14 19:18:22.168881] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:44.930 [2024-02-14 19:18:22.168893] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:44.930 [2024-02-14 19:18:22.168919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:44.930 [2024-02-14 19:18:22.168948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:44.930 [2024-02-14 19:18:22.168957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:44.930 [2024-02-14 19:18:22.168968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:44.930 [2024-02-14 19:18:22.168978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.930 [2024-02-14 19:18:22.168990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:44.930 [2024-02-14 19:18:22.169001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:17:44.930 [2024-02-14 19:18:22.169012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.930 [2024-02-14 19:18:22.183705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.930 [2024-02-14 19:18:22.183745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:44.930 [2024-02-14 19:18:22.183777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.642 ms 00:17:44.930 [2024-02-14 19:18:22.183795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.930 [2024-02-14 19:18:22.183994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.930 [2024-02-14 19:18:22.184012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:44.931 [2024-02-14 19:18:22.184023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:17:44.931 [2024-02-14 19:18:22.184035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.226236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.226296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.931 [2024-02-14 19:18:22.226329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.226345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.226398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.226446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.931 [2024-02-14 19:18:22.226458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.226470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.226586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.226625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.931 [2024-02-14 19:18:22.226638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.226653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.226678] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.226694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.931 [2024-02-14 19:18:22.226706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.226717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.310021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.310098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.931 [2024-02-14 19:18:22.310115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.310132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.343750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.343809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.931 [2024-02-14 19:18:22.343825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.343838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.343912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.343933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.931 [2024-02-14 19:18:22.343945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.343959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.344027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.344050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.931 [2024-02-14 19:18:22.344062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.344074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.344197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.344220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.931 [2024-02-14 19:18:22.344232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.344245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.344291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.344315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:44.931 [2024-02-14 19:18:22.344326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.344339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.344379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.344397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.931 [2024-02-14 19:18:22.344409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.344424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:44.931 [2024-02-14 19:18:22.344475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.931 [2024-02-14 19:18:22.344498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.931 [2024-02-14 19:18:22.344510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.931 [2024-02-14 19:18:22.344522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.931 [2024-02-14 19:18:22.344732] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.225 ms, result 0 00:17:45.191 true 00:17:45.191 19:18:22 -- ftl/bdevperf.sh@37 -- # killprocess 73061 00:17:45.191 19:18:22 -- common/autotest_common.sh@924 -- # '[' -z 73061 ']' 00:17:45.191 19:18:22 -- common/autotest_common.sh@928 -- # kill -0 73061 00:17:45.191 19:18:22 -- common/autotest_common.sh@929 -- # uname 00:17:45.191 19:18:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:45.191 19:18:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 73061 00:17:45.191 killing process with pid 73061 00:17:45.191 Received shutdown signal, test time was about 4.000000 seconds 00:17:45.191 00:17:45.191 Latency(us) 00:17:45.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.191 =================================================================================================================== 00:17:45.191 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.191 19:18:22 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:45.191 19:18:22 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:45.191 19:18:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 73061' 00:17:45.191 19:18:22 -- common/autotest_common.sh@943 -- # kill 73061 00:17:45.191 19:18:22 -- common/autotest_common.sh@948 -- # wait 73061 00:17:46.126 19:18:23 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:17:46.126 19:18:23 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:46.126 19:18:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:46.126 19:18:23 -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 19:18:23 -- ftl/bdevperf.sh@41 -- # remove_shm 00:17:46.126 19:18:23 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:46.126 Remove shared memory files 00:17:46.126 19:18:23 -- ftl/common.sh@205 -- # rm -f rm -f 00:17:46.126 19:18:23 -- ftl/common.sh@206 -- # rm -f rm -f 00:17:46.126 19:18:23 -- ftl/common.sh@207 -- # rm -f rm -f 00:17:46.126 19:18:23 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:46.126 19:18:23 -- ftl/common.sh@209 -- # rm -f rm -f 00:17:46.126 ************************************ 00:17:46.126 END TEST ftl_bdevperf 00:17:46.126 ************************************ 00:17:46.126 00:17:46.126 real 0m21.930s 00:17:46.126 user 0m25.457s 00:17:46.126 sys 0m0.973s 00:17:46.126 19:18:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:46.126 19:18:23 -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 19:18:23 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:17:46.126 19:18:23 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:17:46.126 19:18:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:46.126 19:18:23 -- common/autotest_common.sh@10 -- # set +x 00:17:46.126 ************************************ 
00:17:46.126 START TEST ftl_trim 00:17:46.126 ************************************ 00:17:46.126 19:18:23 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:17:46.385 * Looking for test storage... 00:17:46.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:46.385 19:18:23 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:46.385 19:18:23 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:46.385 19:18:23 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:46.385 19:18:23 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:46.385 19:18:23 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:46.385 19:18:23 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:46.385 19:18:23 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.385 19:18:23 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:46.385 19:18:23 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:46.385 19:18:23 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.385 19:18:23 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.385 19:18:23 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:46.385 19:18:23 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:46.385 19:18:23 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:46.385 19:18:23 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:46.385 19:18:23 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:46.385 19:18:23 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:46.385 19:18:23 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.385 19:18:23 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.386 19:18:23 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:46.386 19:18:23 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:46.386 19:18:23 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:46.386 19:18:23 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:46.386 19:18:23 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:46.386 19:18:23 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:46.386 19:18:23 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:46.386 19:18:23 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:46.386 19:18:23 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:46.386 19:18:23 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:46.386 19:18:23 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.386 19:18:23 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:17:46.386 19:18:23 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:17:46.386 19:18:23 -- ftl/trim.sh@25 -- # timeout=240 00:17:46.386 19:18:23 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:46.386 19:18:23 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:46.386 19:18:23 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:46.386 19:18:23 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:17:46.386 19:18:23 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:46.386 19:18:23 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:46.386 19:18:23 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:46.386 19:18:23 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:46.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.386 19:18:23 -- ftl/trim.sh@40 -- # svcpid=73408 00:17:46.386 19:18:23 -- ftl/trim.sh@41 -- # waitforlisten 73408 00:17:46.386 19:18:23 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:46.386 19:18:23 -- common/autotest_common.sh@817 -- # '[' -z 73408 ']' 00:17:46.386 19:18:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.386 19:18:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:46.386 19:18:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.386 19:18:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:46.386 19:18:23 -- common/autotest_common.sh@10 -- # set +x 00:17:46.386 [2024-02-14 19:18:23.687349] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:46.386 [2024-02-14 19:18:23.687805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73408 ] 00:17:46.644 [2024-02-14 19:18:23.855235] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:46.644 [2024-02-14 19:18:24.016254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:46.644 [2024-02-14 19:18:24.016648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.644 [2024-02-14 19:18:24.016853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.644 [2024-02-14 19:18:24.016867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.021 19:18:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:48.021 19:18:25 -- common/autotest_common.sh@850 -- # return 0 00:17:48.021 19:18:25 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:48.021 19:18:25 -- ftl/common.sh@54 -- # local name=nvme0 00:17:48.021 19:18:25 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:48.021 19:18:25 -- ftl/common.sh@56 -- # local size=103424 00:17:48.021 19:18:25 -- ftl/common.sh@59 -- # local base_bdev 00:17:48.021 19:18:25 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:48.588 19:18:25 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:48.589 19:18:25 -- ftl/common.sh@62 -- # local base_size 00:17:48.589 19:18:25 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:48.589 19:18:25 -- common/autotest_common.sh@1355 -- # local bdev_name=nvme0n1 00:17:48.589 19:18:25 -- common/autotest_common.sh@1356 -- # local bdev_info 00:17:48.589 19:18:25 -- common/autotest_common.sh@1357 -- # local bs 00:17:48.589 19:18:25 -- common/autotest_common.sh@1358 -- # local nb 00:17:48.589 19:18:25 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:48.589 19:18:25 
-- common/autotest_common.sh@1359 -- # bdev_info='[ 00:17:48.589 { 00:17:48.589 "name": "nvme0n1", 00:17:48.589 "aliases": [ 00:17:48.589 "d2d4fe4b-a1ef-4726-be83-4c9dbb611a46" 00:17:48.589 ], 00:17:48.589 "product_name": "NVMe disk", 00:17:48.589 "block_size": 4096, 00:17:48.589 "num_blocks": 1310720, 00:17:48.589 "uuid": "d2d4fe4b-a1ef-4726-be83-4c9dbb611a46", 00:17:48.589 "assigned_rate_limits": { 00:17:48.589 "rw_ios_per_sec": 0, 00:17:48.589 "rw_mbytes_per_sec": 0, 00:17:48.589 "r_mbytes_per_sec": 0, 00:17:48.589 "w_mbytes_per_sec": 0 00:17:48.589 }, 00:17:48.589 "claimed": true, 00:17:48.589 "claim_type": "read_many_write_one", 00:17:48.589 "zoned": false, 00:17:48.589 "supported_io_types": { 00:17:48.589 "read": true, 00:17:48.589 "write": true, 00:17:48.589 "unmap": true, 00:17:48.589 "write_zeroes": true, 00:17:48.589 "flush": true, 00:17:48.589 "reset": true, 00:17:48.589 "compare": true, 00:17:48.589 "compare_and_write": false, 00:17:48.589 "abort": true, 00:17:48.589 "nvme_admin": true, 00:17:48.589 "nvme_io": true 00:17:48.589 }, 00:17:48.589 "driver_specific": { 00:17:48.589 "nvme": [ 00:17:48.589 { 00:17:48.589 "pci_address": "0000:00:07.0", 00:17:48.589 "trid": { 00:17:48.589 "trtype": "PCIe", 00:17:48.589 "traddr": "0000:00:07.0" 00:17:48.589 }, 00:17:48.589 "ctrlr_data": { 00:17:48.589 "cntlid": 0, 00:17:48.589 "vendor_id": "0x1b36", 00:17:48.589 "model_number": "QEMU NVMe Ctrl", 00:17:48.589 "serial_number": "12341", 00:17:48.589 "firmware_revision": "8.0.0", 00:17:48.589 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:48.589 "oacs": { 00:17:48.589 "security": 0, 00:17:48.589 "format": 1, 00:17:48.589 "firmware": 0, 00:17:48.589 "ns_manage": 1 00:17:48.589 }, 00:17:48.589 "multi_ctrlr": false, 00:17:48.589 "ana_reporting": false 00:17:48.589 }, 00:17:48.589 "vs": { 00:17:48.589 "nvme_version": "1.4" 00:17:48.589 }, 00:17:48.589 "ns_data": { 00:17:48.589 "id": 1, 00:17:48.589 "can_share": false 00:17:48.589 } 00:17:48.589 } 00:17:48.589 ], 00:17:48.589 "mp_policy": "active_passive" 00:17:48.589 } 00:17:48.589 } 00:17:48.589 ]' 00:17:48.589 19:18:25 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:17:48.589 19:18:25 -- common/autotest_common.sh@1360 -- # bs=4096 00:17:48.589 19:18:25 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:17:48.848 19:18:26 -- common/autotest_common.sh@1361 -- # nb=1310720 00:17:48.848 19:18:26 -- common/autotest_common.sh@1364 -- # bdev_size=5120 00:17:48.848 19:18:26 -- common/autotest_common.sh@1365 -- # echo 5120 00:17:48.848 19:18:26 -- ftl/common.sh@63 -- # base_size=5120 00:17:48.848 19:18:26 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:48.848 19:18:26 -- ftl/common.sh@67 -- # clear_lvols 00:17:48.848 19:18:26 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:48.848 19:18:26 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:48.848 19:18:26 -- ftl/common.sh@28 -- # stores=af0d13ae-fd14-47a5-a611-85064069682f 00:17:48.848 19:18:26 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:48.848 19:18:26 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af0d13ae-fd14-47a5-a611-85064069682f 00:17:49.107 19:18:26 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:49.366 19:18:26 -- ftl/common.sh@68 -- # lvs=2a251c8e-2a35-4085-85cd-c67f0c4a697a 00:17:49.366 19:18:26 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 
103424 -t -u 2a251c8e-2a35-4085-85cd-c67f0c4a697a 00:17:49.625 19:18:27 -- ftl/trim.sh@43 -- # split_bdev=ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:49.625 19:18:27 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:49.625 19:18:27 -- ftl/common.sh@35 -- # local name=nvc0 00:17:49.625 19:18:27 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:49.625 19:18:27 -- ftl/common.sh@37 -- # local base_bdev=ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:49.625 19:18:27 -- ftl/common.sh@38 -- # local cache_size= 00:17:49.625 19:18:27 -- ftl/common.sh@41 -- # get_bdev_size ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:49.625 19:18:27 -- common/autotest_common.sh@1355 -- # local bdev_name=ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:49.625 19:18:27 -- common/autotest_common.sh@1356 -- # local bdev_info 00:17:49.625 19:18:27 -- common/autotest_common.sh@1357 -- # local bs 00:17:49.625 19:18:27 -- common/autotest_common.sh@1358 -- # local nb 00:17:49.625 19:18:27 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:49.884 19:18:27 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:17:49.884 { 00:17:49.884 "name": "ed565204-c8cd-4cba-8013-ab9f31c85b17", 00:17:49.884 "aliases": [ 00:17:49.884 "lvs/nvme0n1p0" 00:17:49.884 ], 00:17:49.884 "product_name": "Logical Volume", 00:17:49.884 "block_size": 4096, 00:17:49.884 "num_blocks": 26476544, 00:17:49.884 "uuid": "ed565204-c8cd-4cba-8013-ab9f31c85b17", 00:17:49.884 "assigned_rate_limits": { 00:17:49.884 "rw_ios_per_sec": 0, 00:17:49.884 "rw_mbytes_per_sec": 0, 00:17:49.884 "r_mbytes_per_sec": 0, 00:17:49.884 "w_mbytes_per_sec": 0 00:17:49.884 }, 00:17:49.884 "claimed": false, 00:17:49.884 "zoned": false, 00:17:49.884 "supported_io_types": { 00:17:49.884 "read": true, 00:17:49.884 "write": true, 00:17:49.884 "unmap": true, 00:17:49.884 "write_zeroes": true, 00:17:49.884 "flush": false, 00:17:49.884 "reset": true, 00:17:49.884 "compare": false, 00:17:49.884 "compare_and_write": false, 00:17:49.884 "abort": false, 00:17:49.884 "nvme_admin": false, 00:17:49.884 "nvme_io": false 00:17:49.884 }, 00:17:49.884 "driver_specific": { 00:17:49.884 "lvol": { 00:17:49.884 "lvol_store_uuid": "2a251c8e-2a35-4085-85cd-c67f0c4a697a", 00:17:49.884 "base_bdev": "nvme0n1", 00:17:49.884 "thin_provision": true, 00:17:49.884 "snapshot": false, 00:17:49.884 "clone": false, 00:17:49.884 "esnap_clone": false 00:17:49.884 } 00:17:49.884 } 00:17:49.884 } 00:17:49.884 ]' 00:17:49.884 19:18:27 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:17:49.884 19:18:27 -- common/autotest_common.sh@1360 -- # bs=4096 00:17:49.884 19:18:27 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:17:50.144 19:18:27 -- common/autotest_common.sh@1361 -- # nb=26476544 00:17:50.144 19:18:27 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:17:50.144 19:18:27 -- common/autotest_common.sh@1365 -- # echo 103424 00:17:50.144 19:18:27 -- ftl/common.sh@41 -- # local base_size=5171 00:17:50.144 19:18:27 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:50.144 19:18:27 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:50.403 19:18:27 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:50.403 19:18:27 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:50.403 19:18:27 -- ftl/common.sh@48 -- # get_bdev_size ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:50.403 19:18:27 -- 
common/autotest_common.sh@1355 -- # local bdev_name=ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:50.403 19:18:27 -- common/autotest_common.sh@1356 -- # local bdev_info 00:17:50.403 19:18:27 -- common/autotest_common.sh@1357 -- # local bs 00:17:50.403 19:18:27 -- common/autotest_common.sh@1358 -- # local nb 00:17:50.403 19:18:27 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:50.662 19:18:27 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:17:50.662 { 00:17:50.662 "name": "ed565204-c8cd-4cba-8013-ab9f31c85b17", 00:17:50.662 "aliases": [ 00:17:50.662 "lvs/nvme0n1p0" 00:17:50.662 ], 00:17:50.662 "product_name": "Logical Volume", 00:17:50.662 "block_size": 4096, 00:17:50.662 "num_blocks": 26476544, 00:17:50.662 "uuid": "ed565204-c8cd-4cba-8013-ab9f31c85b17", 00:17:50.662 "assigned_rate_limits": { 00:17:50.662 "rw_ios_per_sec": 0, 00:17:50.662 "rw_mbytes_per_sec": 0, 00:17:50.662 "r_mbytes_per_sec": 0, 00:17:50.662 "w_mbytes_per_sec": 0 00:17:50.662 }, 00:17:50.662 "claimed": false, 00:17:50.662 "zoned": false, 00:17:50.662 "supported_io_types": { 00:17:50.662 "read": true, 00:17:50.662 "write": true, 00:17:50.662 "unmap": true, 00:17:50.662 "write_zeroes": true, 00:17:50.662 "flush": false, 00:17:50.662 "reset": true, 00:17:50.662 "compare": false, 00:17:50.662 "compare_and_write": false, 00:17:50.662 "abort": false, 00:17:50.662 "nvme_admin": false, 00:17:50.662 "nvme_io": false 00:17:50.662 }, 00:17:50.662 "driver_specific": { 00:17:50.662 "lvol": { 00:17:50.662 "lvol_store_uuid": "2a251c8e-2a35-4085-85cd-c67f0c4a697a", 00:17:50.662 "base_bdev": "nvme0n1", 00:17:50.662 "thin_provision": true, 00:17:50.662 "snapshot": false, 00:17:50.662 "clone": false, 00:17:50.662 "esnap_clone": false 00:17:50.662 } 00:17:50.662 } 00:17:50.662 } 00:17:50.662 ]' 00:17:50.662 19:18:27 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:17:50.662 19:18:27 -- common/autotest_common.sh@1360 -- # bs=4096 00:17:50.662 19:18:27 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:17:50.662 19:18:27 -- common/autotest_common.sh@1361 -- # nb=26476544 00:17:50.662 19:18:27 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:17:50.662 19:18:27 -- common/autotest_common.sh@1365 -- # echo 103424 00:17:50.662 19:18:27 -- ftl/common.sh@48 -- # cache_size=5171 00:17:50.662 19:18:27 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:50.921 19:18:28 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:50.921 19:18:28 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:50.921 19:18:28 -- ftl/trim.sh@47 -- # get_bdev_size ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:50.921 19:18:28 -- common/autotest_common.sh@1355 -- # local bdev_name=ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:50.921 19:18:28 -- common/autotest_common.sh@1356 -- # local bdev_info 00:17:50.921 19:18:28 -- common/autotest_common.sh@1357 -- # local bs 00:17:50.921 19:18:28 -- common/autotest_common.sh@1358 -- # local nb 00:17:50.921 19:18:28 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ed565204-c8cd-4cba-8013-ab9f31c85b17 00:17:51.181 19:18:28 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:17:51.181 { 00:17:51.181 "name": "ed565204-c8cd-4cba-8013-ab9f31c85b17", 00:17:51.181 "aliases": [ 00:17:51.181 "lvs/nvme0n1p0" 00:17:51.181 ], 00:17:51.181 "product_name": "Logical Volume", 00:17:51.181 "block_size": 4096, 00:17:51.181 
"num_blocks": 26476544, 00:17:51.181 "uuid": "ed565204-c8cd-4cba-8013-ab9f31c85b17", 00:17:51.181 "assigned_rate_limits": { 00:17:51.181 "rw_ios_per_sec": 0, 00:17:51.181 "rw_mbytes_per_sec": 0, 00:17:51.181 "r_mbytes_per_sec": 0, 00:17:51.181 "w_mbytes_per_sec": 0 00:17:51.181 }, 00:17:51.181 "claimed": false, 00:17:51.181 "zoned": false, 00:17:51.181 "supported_io_types": { 00:17:51.181 "read": true, 00:17:51.181 "write": true, 00:17:51.181 "unmap": true, 00:17:51.181 "write_zeroes": true, 00:17:51.181 "flush": false, 00:17:51.181 "reset": true, 00:17:51.181 "compare": false, 00:17:51.181 "compare_and_write": false, 00:17:51.181 "abort": false, 00:17:51.181 "nvme_admin": false, 00:17:51.181 "nvme_io": false 00:17:51.181 }, 00:17:51.181 "driver_specific": { 00:17:51.181 "lvol": { 00:17:51.181 "lvol_store_uuid": "2a251c8e-2a35-4085-85cd-c67f0c4a697a", 00:17:51.181 "base_bdev": "nvme0n1", 00:17:51.181 "thin_provision": true, 00:17:51.181 "snapshot": false, 00:17:51.181 "clone": false, 00:17:51.181 "esnap_clone": false 00:17:51.181 } 00:17:51.181 } 00:17:51.181 } 00:17:51.181 ]' 00:17:51.181 19:18:28 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:17:51.181 19:18:28 -- common/autotest_common.sh@1360 -- # bs=4096 00:17:51.181 19:18:28 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:17:51.181 19:18:28 -- common/autotest_common.sh@1361 -- # nb=26476544 00:17:51.181 19:18:28 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:17:51.181 19:18:28 -- common/autotest_common.sh@1365 -- # echo 103424 00:17:51.181 19:18:28 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:51.181 19:18:28 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ed565204-c8cd-4cba-8013-ab9f31c85b17 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:51.441 [2024-02-14 19:18:28.713934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.441 [2024-02-14 19:18:28.714017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:51.441 [2024-02-14 19:18:28.714040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:51.441 [2024-02-14 19:18:28.714051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.441 [2024-02-14 19:18:28.717306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.441 [2024-02-14 19:18:28.717345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.441 [2024-02-14 19:18:28.717364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms 00:17:51.441 [2024-02-14 19:18:28.717376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.441 [2024-02-14 19:18:28.717543] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:51.441 [2024-02-14 19:18:28.718529] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:51.441 [2024-02-14 19:18:28.718571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.441 [2024-02-14 19:18:28.718586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.442 [2024-02-14 19:18:28.718600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:17:51.442 [2024-02-14 19:18:28.718611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.442 [2024-02-14 19:18:28.718833] mngt/ftl_mngt_md.c: 
567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d840bd34-7af4-4abc-a654-d3a1fdd47b5d 00:17:51.442 [2024-02-14 19:18:28.719816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.442 [2024-02-14 19:18:28.719848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:51.442 [2024-02-14 19:18:28.719863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:51.442 [2024-02-14 19:18:28.719876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.442 [2024-02-14 19:18:28.724601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.442 [2024-02-14 19:18:28.724804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.442 [2024-02-14 19:18:28.724942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:17:51.442 [2024-02-14 19:18:28.725063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.442 [2024-02-14 19:18:28.725395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.442 [2024-02-14 19:18:28.725488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.442 [2024-02-14 19:18:28.725577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:17:51.442 [2024-02-14 19:18:28.725659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.442 [2024-02-14 19:18:28.725843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.442 [2024-02-14 19:18:28.725995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:51.442 [2024-02-14 19:18:28.726120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:51.442 [2024-02-14 19:18:28.726179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.442 [2024-02-14 19:18:28.726279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:51.442 [2024-02-14 19:18:28.730880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.442 [2024-02-14 19:18:28.731036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.442 [2024-02-14 19:18:28.731160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.608 ms 00:17:51.442 [2024-02-14 19:18:28.731289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.442 [2024-02-14 19:18:28.731431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.442 [2024-02-14 19:18:28.731504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:51.442 [2024-02-14 19:18:28.731691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:51.442 [2024-02-14 19:18:28.731829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.442 [2024-02-14 19:18:28.731942] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:51.442 [2024-02-14 19:18:28.732178] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:51.442 [2024-02-14 19:18:28.732330] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:51.442 [2024-02-14 19:18:28.732456] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:17:51.442 [2024-02-14 19:18:28.732527] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:51.442 [2024-02-14 19:18:28.732547] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:51.442 [2024-02-14 19:18:28.732567] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:51.442 [2024-02-14 19:18:28.732579] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:51.442 [2024-02-14 19:18:28.732605] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:51.442 [2024-02-14 19:18:28.732618] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:51.442 [2024-02-14 19:18:28.732639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.442 [2024-02-14 19:18:28.732653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:51.442 [2024-02-14 19:18:28.732671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:17:51.442 [2024-02-14 19:18:28.732684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.442 [2024-02-14 19:18:28.732794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.442 [2024-02-14 19:18:28.732810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:51.442 [2024-02-14 19:18:28.732843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:51.442 [2024-02-14 19:18:28.732855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.442 [2024-02-14 19:18:28.732980] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:51.442 [2024-02-14 19:18:28.732996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:51.442 [2024-02-14 19:18:28.733011] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.442 [2024-02-14 19:18:28.733023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:51.442 [2024-02-14 19:18:28.733048] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:51.442 [2024-02-14 19:18:28.733071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:51.442 [2024-02-14 19:18:28.733084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.442 [2024-02-14 19:18:28.733107] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:51.442 [2024-02-14 19:18:28.733118] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:51.442 [2024-02-14 19:18:28.733130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.442 [2024-02-14 19:18:28.733141] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:51.442 [2024-02-14 19:18:28.733153] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:51.442 [2024-02-14 19:18:28.733163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733179] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:17:51.442 [2024-02-14 19:18:28.733190] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:51.442 [2024-02-14 19:18:28.733203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:51.442 [2024-02-14 19:18:28.733226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:51.442 [2024-02-14 19:18:28.733237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:51.442 [2024-02-14 19:18:28.733250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:51.442 [2024-02-14 19:18:28.733261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:51.442 [2024-02-14 19:18:28.733284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:51.442 [2024-02-14 19:18:28.733311] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:51.442 [2024-02-14 19:18:28.733333] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:51.442 [2024-02-14 19:18:28.733343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:51.442 [2024-02-14 19:18:28.733364] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:51.442 [2024-02-14 19:18:28.733378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:51.442 [2024-02-14 19:18:28.733400] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:51.442 [2024-02-14 19:18:28.733411] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.442 [2024-02-14 19:18:28.733432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:51.442 [2024-02-14 19:18:28.733444] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:51.442 [2024-02-14 19:18:28.733455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.442 [2024-02-14 19:18:28.733467] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:51.442 [2024-02-14 19:18:28.733478] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:51.442 [2024-02-14 19:18:28.733491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.442 [2024-02-14 19:18:28.733502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.442 [2024-02-14 19:18:28.733520] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:51.442 [2024-02-14 19:18:28.733545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:51.442 [2024-02-14 19:18:28.733561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:51.442 [2024-02-14 19:18:28.733572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:51.442 [2024-02-14 19:18:28.733586] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:51.442 [2024-02-14 19:18:28.733597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:51.442 [2024-02-14 19:18:28.733613] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:51.442 [2024-02-14 19:18:28.733628] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.442 [2024-02-14 19:18:28.733689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:51.442 [2024-02-14 19:18:28.733703] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:51.442 [2024-02-14 19:18:28.733718] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:51.443 [2024-02-14 19:18:28.733730] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:51.443 [2024-02-14 19:18:28.733745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:51.443 [2024-02-14 19:18:28.733758] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:51.443 [2024-02-14 19:18:28.733778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:51.443 [2024-02-14 19:18:28.733791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:51.443 [2024-02-14 19:18:28.733805] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:51.443 [2024-02-14 19:18:28.733817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:51.443 [2024-02-14 19:18:28.733832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:51.443 [2024-02-14 19:18:28.733845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:51.443 [2024-02-14 19:18:28.733862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:51.443 [2024-02-14 19:18:28.733874] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:51.443 [2024-02-14 19:18:28.733891] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.443 [2024-02-14 19:18:28.733905] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:51.443 [2024-02-14 19:18:28.733920] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:51.443 [2024-02-14 19:18:28.733933] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:51.443 [2024-02-14 19:18:28.733947] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:51.443 [2024-02-14 19:18:28.733961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.443 [2024-02-14 19:18:28.733975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:51.443 [2024-02-14 19:18:28.734003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:17:51.443 [2024-02-14 19:18:28.734045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.443 [2024-02-14 19:18:28.751827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.443 [2024-02-14 19:18:28.751899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.443 [2024-02-14 19:18:28.751918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.692 ms 00:17:51.443 [2024-02-14 19:18:28.751932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.443 [2024-02-14 19:18:28.752101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.443 [2024-02-14 19:18:28.752131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:51.443 [2024-02-14 19:18:28.752164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:51.443 [2024-02-14 19:18:28.752177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.443 [2024-02-14 19:18:28.790474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.443 [2024-02-14 19:18:28.790552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.443 [2024-02-14 19:18:28.790572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.260 ms 00:17:51.443 [2024-02-14 19:18:28.790587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.443 [2024-02-14 19:18:28.790730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.443 [2024-02-14 19:18:28.790754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.443 [2024-02-14 19:18:28.790768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:51.443 [2024-02-14 19:18:28.790797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.443 [2024-02-14 19:18:28.791130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.443 [2024-02-14 19:18:28.791179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.443 [2024-02-14 19:18:28.791194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:17:51.443 [2024-02-14 19:18:28.791207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.443 [2024-02-14 19:18:28.791355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.443 [2024-02-14 19:18:28.791379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.443 [2024-02-14 19:18:28.791393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:17:51.443 [2024-02-14 19:18:28.791406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.443 [2024-02-14 19:18:28.815306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.443 [2024-02-14 
19:18:28.815366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.443 [2024-02-14 19:18:28.815384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.862 ms 00:17:51.443 [2024-02-14 19:18:28.815397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.443 [2024-02-14 19:18:28.829702] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:51.443 [2024-02-14 19:18:28.845047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.443 [2024-02-14 19:18:28.845326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:51.443 [2024-02-14 19:18:28.845460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.368 ms 00:17:51.443 [2024-02-14 19:18:28.845600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.702 [2024-02-14 19:18:28.911980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.702 [2024-02-14 19:18:28.912260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:51.702 [2024-02-14 19:18:28.912387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.105 ms 00:17:51.702 [2024-02-14 19:18:28.912571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.702 [2024-02-14 19:18:28.912820] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:51.702 [2024-02-14 19:18:28.912856] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:54.234 [2024-02-14 19:18:31.111952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.112020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:54.234 [2024-02-14 19:18:31.112057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2199.148 ms 00:17:54.234 [2024-02-14 19:18:31.112072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.112361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.112388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.234 [2024-02-14 19:18:31.112408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:17:54.234 [2024-02-14 19:18:31.112420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.142370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.142408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:54.234 [2024-02-14 19:18:31.142429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.906 ms 00:17:54.234 [2024-02-14 19:18:31.142441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.172186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.172223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:54.234 [2024-02-14 19:18:31.172247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.612 ms 00:17:54.234 [2024-02-14 19:18:31.172258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.172743] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.172770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.234 [2024-02-14 19:18:31.172787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:17:54.234 [2024-02-14 19:18:31.172801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.247731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.247778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:54.234 [2024-02-14 19:18:31.247803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.864 ms 00:17:54.234 [2024-02-14 19:18:31.247816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.278226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.278266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:54.234 [2024-02-14 19:18:31.278286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.317 ms 00:17:54.234 [2024-02-14 19:18:31.278297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.282225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.282263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:54.234 [2024-02-14 19:18:31.282283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.835 ms 00:17:54.234 [2024-02-14 19:18:31.282294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.312605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.312643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.234 [2024-02-14 19:18:31.312663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.266 ms 00:17:54.234 [2024-02-14 19:18:31.312675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.312775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.312800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.234 [2024-02-14 19:18:31.312817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.234 [2024-02-14 19:18:31.312828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.312955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.234 [2024-02-14 19:18:31.312975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.234 [2024-02-14 19:18:31.312992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:54.234 [2024-02-14 19:18:31.313002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.234 [2024-02-14 19:18:31.314048] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.234 [2024-02-14 19:18:31.318244] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2599.757 ms, result 0 00:17:54.234 [2024-02-14 19:18:31.319299] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] 
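The notices from "Check configuration" through "Management process finished, name 'FTL startup' ... result 0" above are all emitted in response to the single bdev_ftl_create RPC issued by ftl/trim.sh. Condensed, the device stack exercised here comes down to three RPCs layered on the thin-provisioned lvol created earlier. The sketch below is a reconstruction from the trace; the PCI address, split size and lvol UUID are the values observed in this particular run, and RPC_PY is an illustrative shorthand for the scripts/rpc.py path used above:

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Attach the second NVMe controller; its namespace (nvc0n1) backs the NV cache.
  $RPC_PY bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0

  # Carve one 5171 MiB split to act as the write buffer cache (-> nvc0n1p0).
  $RPC_PY bdev_split_create nvc0n1 -s 5171 1

  # Create the FTL bdev over the lvol (base) and the split (cache). On first
  # startup the NV cache data region is scrubbed, which is the ~2.2 s
  # "Scrub NV cache" step in the trace above.
  $RPC_PY -t 240 bdev_ftl_create -b ftl0 \
    -d ed565204-c8cd-4cba-8013-ab9f31c85b17 -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
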
FTL IO channel destroy on app_thread 00:17:54.234 { 00:17:54.234 "name": "ftl0", 00:17:54.234 "uuid": "d840bd34-7af4-4abc-a654-d3a1fdd47b5d" 00:17:54.234 } 00:17:54.234 19:18:31 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:54.234 19:18:31 -- common/autotest_common.sh@885 -- # local bdev_name=ftl0 00:17:54.234 19:18:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:54.234 19:18:31 -- common/autotest_common.sh@887 -- # local i 00:17:54.234 19:18:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:54.234 19:18:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:54.234 19:18:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:54.234 19:18:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:54.493 [ 00:17:54.493 { 00:17:54.493 "name": "ftl0", 00:17:54.493 "aliases": [ 00:17:54.493 "d840bd34-7af4-4abc-a654-d3a1fdd47b5d" 00:17:54.493 ], 00:17:54.493 "product_name": "FTL disk", 00:17:54.493 "block_size": 4096, 00:17:54.493 "num_blocks": 23592960, 00:17:54.493 "uuid": "d840bd34-7af4-4abc-a654-d3a1fdd47b5d", 00:17:54.493 "assigned_rate_limits": { 00:17:54.493 "rw_ios_per_sec": 0, 00:17:54.493 "rw_mbytes_per_sec": 0, 00:17:54.493 "r_mbytes_per_sec": 0, 00:17:54.493 "w_mbytes_per_sec": 0 00:17:54.493 }, 00:17:54.493 "claimed": false, 00:17:54.493 "zoned": false, 00:17:54.493 "supported_io_types": { 00:17:54.493 "read": true, 00:17:54.493 "write": true, 00:17:54.493 "unmap": true, 00:17:54.493 "write_zeroes": true, 00:17:54.493 "flush": true, 00:17:54.493 "reset": false, 00:17:54.493 "compare": false, 00:17:54.493 "compare_and_write": false, 00:17:54.493 "abort": false, 00:17:54.493 "nvme_admin": false, 00:17:54.493 "nvme_io": false 00:17:54.493 }, 00:17:54.493 "driver_specific": { 00:17:54.493 "ftl": { 00:17:54.493 "base_bdev": "ed565204-c8cd-4cba-8013-ab9f31c85b17", 00:17:54.493 "cache": "nvc0n1p0" 00:17:54.493 } 00:17:54.493 } 00:17:54.493 } 00:17:54.493 ] 00:17:54.493 19:18:31 -- common/autotest_common.sh@893 -- # return 0 00:17:54.493 19:18:31 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:54.493 19:18:31 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:54.752 19:18:32 -- ftl/trim.sh@56 -- # echo ']}' 00:17:54.752 19:18:32 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:55.022 19:18:32 -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:55.022 { 00:17:55.022 "name": "ftl0", 00:17:55.022 "aliases": [ 00:17:55.022 "d840bd34-7af4-4abc-a654-d3a1fdd47b5d" 00:17:55.022 ], 00:17:55.022 "product_name": "FTL disk", 00:17:55.022 "block_size": 4096, 00:17:55.022 "num_blocks": 23592960, 00:17:55.022 "uuid": "d840bd34-7af4-4abc-a654-d3a1fdd47b5d", 00:17:55.022 "assigned_rate_limits": { 00:17:55.022 "rw_ios_per_sec": 0, 00:17:55.022 "rw_mbytes_per_sec": 0, 00:17:55.022 "r_mbytes_per_sec": 0, 00:17:55.022 "w_mbytes_per_sec": 0 00:17:55.022 }, 00:17:55.022 "claimed": false, 00:17:55.022 "zoned": false, 00:17:55.022 "supported_io_types": { 00:17:55.022 "read": true, 00:17:55.022 "write": true, 00:17:55.022 "unmap": true, 00:17:55.022 "write_zeroes": true, 00:17:55.022 "flush": true, 00:17:55.022 "reset": false, 00:17:55.022 "compare": false, 00:17:55.022 "compare_and_write": false, 00:17:55.022 "abort": false, 00:17:55.022 "nvme_admin": false, 00:17:55.022 "nvme_io": false 00:17:55.022 }, 00:17:55.022 "driver_specific": { 00:17:55.022 "ftl": { 
00:17:55.022 "base_bdev": "ed565204-c8cd-4cba-8013-ab9f31c85b17", 00:17:55.022 "cache": "nvc0n1p0" 00:17:55.022 } 00:17:55.022 } 00:17:55.022 } 00:17:55.022 ]' 00:17:55.022 19:18:32 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:55.294 19:18:32 -- ftl/trim.sh@60 -- # nb=23592960 00:17:55.294 19:18:32 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:55.294 [2024-02-14 19:18:32.711178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.294 [2024-02-14 19:18:32.711257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:55.294 [2024-02-14 19:18:32.711278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:55.294 [2024-02-14 19:18:32.711309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.294 [2024-02-14 19:18:32.711356] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:55.554 [2024-02-14 19:18:32.714981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.554 [2024-02-14 19:18:32.715013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:55.554 [2024-02-14 19:18:32.715036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:17:55.554 [2024-02-14 19:18:32.715047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.554 [2024-02-14 19:18:32.715793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.554 [2024-02-14 19:18:32.715819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:55.554 [2024-02-14 19:18:32.715837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:17:55.554 [2024-02-14 19:18:32.715849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.554 [2024-02-14 19:18:32.719645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.554 [2024-02-14 19:18:32.719836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:55.554 [2024-02-14 19:18:32.719979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.750 ms 00:17:55.554 [2024-02-14 19:18:32.720119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.554 [2024-02-14 19:18:32.727653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.554 [2024-02-14 19:18:32.727823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:55.554 [2024-02-14 19:18:32.727970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.408 ms 00:17:55.554 [2024-02-14 19:18:32.728030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.554 [2024-02-14 19:18:32.757721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.554 [2024-02-14 19:18:32.757884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:55.554 [2024-02-14 19:18:32.758043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.414 ms 00:17:55.554 [2024-02-14 19:18:32.758103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.555 [2024-02-14 19:18:32.776011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.555 [2024-02-14 19:18:32.776211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:55.555 [2024-02-14 19:18:32.776353] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.677 ms 00:17:55.555 [2024-02-14 19:18:32.776414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.555 [2024-02-14 19:18:32.776931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.555 [2024-02-14 19:18:32.777118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:55.555 [2024-02-14 19:18:32.777252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:17:55.555 [2024-02-14 19:18:32.777367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.555 [2024-02-14 19:18:32.807410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.555 [2024-02-14 19:18:32.807622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:55.555 [2024-02-14 19:18:32.807756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.954 ms 00:17:55.555 [2024-02-14 19:18:32.807819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.555 [2024-02-14 19:18:32.838708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.555 [2024-02-14 19:18:32.838891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:55.555 [2024-02-14 19:18:32.839045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.657 ms 00:17:55.555 [2024-02-14 19:18:32.839069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.555 [2024-02-14 19:18:32.869211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.555 [2024-02-14 19:18:32.869253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:55.555 [2024-02-14 19:18:32.869290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.040 ms 00:17:55.555 [2024-02-14 19:18:32.869302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.555 [2024-02-14 19:18:32.900726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.555 [2024-02-14 19:18:32.900771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:55.555 [2024-02-14 19:18:32.900796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.233 ms 00:17:55.555 [2024-02-14 19:18:32.900808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.555 [2024-02-14 19:18:32.900956] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:55.555 [2024-02-14 19:18:32.900983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 
[2024-02-14 19:18:32.901088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:17:55.555 [2024-02-14 19:18:32.901443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:55.555 [2024-02-14 19:18:32.901936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.901949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.901962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.901976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.901988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:55.556 [2024-02-14 19:18:32.902427] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:55.556 [2024-02-14 19:18:32.902442] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d840bd34-7af4-4abc-a654-d3a1fdd47b5d 00:17:55.556 [2024-02-14 19:18:32.902455] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:55.556 [2024-02-14 19:18:32.902468] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:55.556 [2024-02-14 19:18:32.902479] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:55.556 [2024-02-14 19:18:32.902506] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:55.556 [2024-02-14 19:18:32.902519] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:55.556 [2024-02-14 19:18:32.902533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:17:55.556 [2024-02-14 19:18:32.902544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:55.556 [2024-02-14 19:18:32.902559] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:55.556 [2024-02-14 19:18:32.902569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:55.556 [2024-02-14 19:18:32.902583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.556 [2024-02-14 19:18:32.902597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:55.556 [2024-02-14 19:18:32.902612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:17:55.556 [2024-02-14 19:18:32.902623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.556 [2024-02-14 19:18:32.919575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.556 [2024-02-14 19:18:32.919614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:55.556 [2024-02-14 19:18:32.919650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.908 ms 00:17:55.556 [2024-02-14 19:18:32.919662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.556 [2024-02-14 19:18:32.919942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.556 [2024-02-14 19:18:32.919966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:55.556 [2024-02-14 19:18:32.919983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:55.556 [2024-02-14 19:18:32.919994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:32.975858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:32.975914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.816 [2024-02-14 19:18:32.975953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:32.975966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:32.976116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:32.976135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.816 [2024-02-14 19:18:32.976150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:32.976162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:32.976268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:32.976287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.816 [2024-02-14 19:18:32.976301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:32.976313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:32.976371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:32.976387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.816 [2024-02-14 19:18:32.976400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:32.976411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 
19:18:33.084722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:33.085000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.816 [2024-02-14 19:18:33.085133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:33.085260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:33.121810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:33.121998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:55.816 [2024-02-14 19:18:33.122046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:33.122060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:33.122145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:33.122165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.816 [2024-02-14 19:18:33.122180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:33.122191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:33.122255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:33.122269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.816 [2024-02-14 19:18:33.122286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:33.122298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:33.122442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:33.122462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.816 [2024-02-14 19:18:33.122571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:33.122602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:33.122710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:33.122731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:55.816 [2024-02-14 19:18:33.122749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:33.122760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:33.122825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:33.122841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.816 [2024-02-14 19:18:33.122855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:33.122867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:33.122936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.816 [2024-02-14 19:18:33.122971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.816 [2024-02-14 19:18:33.122991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.816 [2024-02-14 19:18:33.123003] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.816 [2024-02-14 19:18:33.123222] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.013 ms, result 0 00:17:55.816 true 00:17:55.816 19:18:33 -- ftl/trim.sh@63 -- # killprocess 73408 00:17:55.816 19:18:33 -- common/autotest_common.sh@924 -- # '[' -z 73408 ']' 00:17:55.816 19:18:33 -- common/autotest_common.sh@928 -- # kill -0 73408 00:17:55.816 19:18:33 -- common/autotest_common.sh@929 -- # uname 00:17:55.816 19:18:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:55.816 19:18:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 73408 00:17:55.816 killing process with pid 73408 00:17:55.816 19:18:33 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:55.816 19:18:33 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:55.816 19:18:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 73408' 00:17:55.816 19:18:33 -- common/autotest_common.sh@943 -- # kill 73408 00:17:55.816 19:18:33 -- common/autotest_common.sh@948 -- # wait 73408 00:18:01.087 19:18:37 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:01.345 65536+0 records in 00:18:01.345 65536+0 records out 00:18:01.345 268435456 bytes (268 MB, 256 MiB) copied, 1.06535 s, 252 MB/s 00:18:01.345 19:18:38 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:01.346 [2024-02-14 19:18:38.708848] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:18:01.346 [2024-02-14 19:18:38.708962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73626 ] 00:18:01.604 [2024-02-14 19:18:38.865174] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.862 [2024-02-14 19:18:39.029012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.862 [2024-02-14 19:18:39.029112] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:18:02.122 [2024-02-14 19:18:39.308183] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:02.122 [2024-02-14 19:18:39.308272] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:02.122 [2024-02-14 19:18:39.460878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.122 [2024-02-14 19:18:39.460947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:02.122 [2024-02-14 19:18:39.460983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:02.122 [2024-02-14 19:18:39.460993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.122 [2024-02-14 19:18:39.464042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.122 [2024-02-14 19:18:39.464086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.122 [2024-02-14 19:18:39.464118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:18:02.122 [2024-02-14 19:18:39.464129] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.122 [2024-02-14 19:18:39.464242] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:02.122 [2024-02-14 19:18:39.465266] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:02.122 [2024-02-14 19:18:39.465321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.122 [2024-02-14 19:18:39.465352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.122 [2024-02-14 19:18:39.465363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:18:02.122 [2024-02-14 19:18:39.465373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.122 [2024-02-14 19:18:39.466808] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:02.122 [2024-02-14 19:18:39.481253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.122 [2024-02-14 19:18:39.481305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:02.122 [2024-02-14 19:18:39.481339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.446 ms 00:18:02.122 [2024-02-14 19:18:39.481350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.122 [2024-02-14 19:18:39.481461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.122 [2024-02-14 19:18:39.481481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:02.122 [2024-02-14 19:18:39.481530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:02.122 [2024-02-14 19:18:39.481541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.122 [2024-02-14 19:18:39.486249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.122 [2024-02-14 19:18:39.486287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.122 [2024-02-14 19:18:39.486318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:18:02.122 [2024-02-14 19:18:39.486334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.122 [2024-02-14 19:18:39.486457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.122 [2024-02-14 19:18:39.486477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.122 [2024-02-14 19:18:39.486489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:02.122 [2024-02-14 19:18:39.486518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.122 [2024-02-14 19:18:39.486577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.122 [2024-02-14 19:18:39.486593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:02.122 [2024-02-14 19:18:39.486604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:02.122 [2024-02-14 19:18:39.486614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.122 [2024-02-14 19:18:39.486666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:02.122 [2024-02-14 19:18:39.490559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.122 [2024-02-14 19:18:39.490594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO 
channel 00:18:02.122 [2024-02-14 19:18:39.490630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.902 ms 00:18:02.122 [2024-02-14 19:18:39.490640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.122 [2024-02-14 19:18:39.490702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.122 [2024-02-14 19:18:39.490720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:02.122 [2024-02-14 19:18:39.490731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:02.122 [2024-02-14 19:18:39.490741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.122 [2024-02-14 19:18:39.490764] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:02.122 [2024-02-14 19:18:39.490790] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:02.122 [2024-02-14 19:18:39.490825] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:02.122 [2024-02-14 19:18:39.490847] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:02.122 [2024-02-14 19:18:39.490918] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:02.122 [2024-02-14 19:18:39.490932] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:02.122 [2024-02-14 19:18:39.490944] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:02.122 [2024-02-14 19:18:39.490958] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:02.122 [2024-02-14 19:18:39.490969] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:02.122 [2024-02-14 19:18:39.490980] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:02.122 [2024-02-14 19:18:39.490989] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:02.122 [2024-02-14 19:18:39.490999] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:02.123 [2024-02-14 19:18:39.491013] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:02.123 [2024-02-14 19:18:39.491023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.123 [2024-02-14 19:18:39.491033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:02.123 [2024-02-14 19:18:39.491043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:18:02.123 [2024-02-14 19:18:39.491054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.123 [2024-02-14 19:18:39.491122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.123 [2024-02-14 19:18:39.491136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:02.123 [2024-02-14 19:18:39.491146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:02.123 [2024-02-14 19:18:39.491156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.123 [2024-02-14 19:18:39.491237] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:02.123 [2024-02-14 19:18:39.491254] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:02.123 [2024-02-14 19:18:39.491265] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.123 [2024-02-14 19:18:39.491275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.123 [2024-02-14 19:18:39.491285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:02.123 [2024-02-14 19:18:39.491294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:02.123 [2024-02-14 19:18:39.491303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:02.123 [2024-02-14 19:18:39.491314] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:02.123 [2024-02-14 19:18:39.491323] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:02.123 [2024-02-14 19:18:39.491332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.123 [2024-02-14 19:18:39.491341] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:02.123 [2024-02-14 19:18:39.491350] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:02.123 [2024-02-14 19:18:39.491359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.123 [2024-02-14 19:18:39.491371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:02.123 [2024-02-14 19:18:39.491380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:02.123 [2024-02-14 19:18:39.491389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.123 [2024-02-14 19:18:39.491398] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:02.123 [2024-02-14 19:18:39.491407] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:02.123 [2024-02-14 19:18:39.491428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.123 [2024-02-14 19:18:39.491438] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:02.123 [2024-02-14 19:18:39.491448] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:02.123 [2024-02-14 19:18:39.491457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:02.123 [2024-02-14 19:18:39.491466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:02.123 [2024-02-14 19:18:39.491475] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:02.123 [2024-02-14 19:18:39.491484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:02.123 [2024-02-14 19:18:39.491493] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:02.123 [2024-02-14 19:18:39.491773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:02.123 [2024-02-14 19:18:39.491831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:02.123 [2024-02-14 19:18:39.491872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:02.123 [2024-02-14 19:18:39.491932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:02.123 [2024-02-14 19:18:39.492073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:02.123 [2024-02-14 19:18:39.492138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:02.123 [2024-02-14 19:18:39.492196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:02.123 [2024-02-14 
19:18:39.492255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:02.123 [2024-02-14 19:18:39.492319] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:02.123 [2024-02-14 19:18:39.492354] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:02.123 [2024-02-14 19:18:39.492387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.123 [2024-02-14 19:18:39.492421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:02.123 [2024-02-14 19:18:39.492455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:02.123 [2024-02-14 19:18:39.492634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.123 [2024-02-14 19:18:39.492687] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:02.123 [2024-02-14 19:18:39.492805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:02.123 [2024-02-14 19:18:39.492823] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.123 [2024-02-14 19:18:39.492835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.123 [2024-02-14 19:18:39.492846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:02.123 [2024-02-14 19:18:39.492861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:02.123 [2024-02-14 19:18:39.492872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:02.123 [2024-02-14 19:18:39.492883] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:02.123 [2024-02-14 19:18:39.492907] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:02.123 [2024-02-14 19:18:39.492918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:02.123 [2024-02-14 19:18:39.492929] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:02.123 [2024-02-14 19:18:39.492943] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.123 [2024-02-14 19:18:39.492960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:02.123 [2024-02-14 19:18:39.492971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:02.123 [2024-02-14 19:18:39.492982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:02.123 [2024-02-14 19:18:39.492993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:02.123 [2024-02-14 19:18:39.493003] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:02.123 [2024-02-14 19:18:39.493014] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:02.123 [2024-02-14 19:18:39.493025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:02.123 [2024-02-14 19:18:39.493035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:02.123 [2024-02-14 19:18:39.493046] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:02.123 [2024-02-14 19:18:39.493057] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:02.123 [2024-02-14 19:18:39.493067] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:02.123 [2024-02-14 19:18:39.493078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:02.123 [2024-02-14 19:18:39.493089] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:02.123 [2024-02-14 19:18:39.493100] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:02.123 [2024-02-14 19:18:39.493111] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.123 [2024-02-14 19:18:39.493123] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:02.123 [2024-02-14 19:18:39.493134] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:02.123 [2024-02-14 19:18:39.493145] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:02.123 [2024-02-14 19:18:39.493156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:02.123 [2024-02-14 19:18:39.493169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.123 [2024-02-14 19:18:39.493181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:02.123 [2024-02-14 19:18:39.493205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.970 ms 00:18:02.123 [2024-02-14 19:18:39.493217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.123 [2024-02-14 19:18:39.509871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.123 [2024-02-14 19:18:39.510075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.123 [2024-02-14 19:18:39.510219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.564 ms 00:18:02.123 [2024-02-14 19:18:39.510271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.123 [2024-02-14 19:18:39.510446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.123 [2024-02-14 19:18:39.510541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:02.123 [2024-02-14 19:18:39.510678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:02.123 [2024-02-14 19:18:39.510732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.555789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.556047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 
00:18:02.383 [2024-02-14 19:18:39.556193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.981 ms 00:18:02.383 [2024-02-14 19:18:39.556244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.556453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.556627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.383 [2024-02-14 19:18:39.556748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:02.383 [2024-02-14 19:18:39.556865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.557277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.557456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.383 [2024-02-14 19:18:39.557594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:02.383 [2024-02-14 19:18:39.557679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.557935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.558101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.383 [2024-02-14 19:18:39.558210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:02.383 [2024-02-14 19:18:39.558269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.574466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.574686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.383 [2024-02-14 19:18:39.574806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.065 ms 00:18:02.383 [2024-02-14 19:18:39.574856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.589591] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:02.383 [2024-02-14 19:18:39.589863] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:02.383 [2024-02-14 19:18:39.590020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.590247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:02.383 [2024-02-14 19:18:39.590272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.901 ms 00:18:02.383 [2024-02-14 19:18:39.590285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.619196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.619269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:02.383 [2024-02-14 19:18:39.619315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.814 ms 00:18:02.383 [2024-02-14 19:18:39.619327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.634767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.634805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:02.383 [2024-02-14 19:18:39.634837] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.352 ms 00:18:02.383 [2024-02-14 19:18:39.634848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.649574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.649628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:02.383 [2024-02-14 19:18:39.649700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.626 ms 00:18:02.383 [2024-02-14 19:18:39.649711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.650226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.650252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:02.383 [2024-02-14 19:18:39.650266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:18:02.383 [2024-02-14 19:18:39.650276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.724607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.724673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:02.383 [2024-02-14 19:18:39.724710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.240 ms 00:18:02.383 [2024-02-14 19:18:39.724728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.736758] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:02.383 [2024-02-14 19:18:39.750207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.750268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:02.383 [2024-02-14 19:18:39.750303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.336 ms 00:18:02.383 [2024-02-14 19:18:39.750315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.750443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.750465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:02.383 [2024-02-14 19:18:39.750478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:02.383 [2024-02-14 19:18:39.750489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.750619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.750637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:02.383 [2024-02-14 19:18:39.750666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:02.383 [2024-02-14 19:18:39.750677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.752895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.752934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:02.383 [2024-02-14 19:18:39.752966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.187 ms 00:18:02.383 [2024-02-14 19:18:39.753008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.753051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.753072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:02.383 [2024-02-14 19:18:39.753085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:02.383 [2024-02-14 19:18:39.753095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.753136] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:02.383 [2024-02-14 19:18:39.753152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.753163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:02.383 [2024-02-14 19:18:39.753175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:02.383 [2024-02-14 19:18:39.753186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.782179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.782392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:02.383 [2024-02-14 19:18:39.782558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.962 ms 00:18:02.383 [2024-02-14 19:18:39.782612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.782922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.383 [2024-02-14 19:18:39.783066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:02.383 [2024-02-14 19:18:39.783183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:02.383 [2024-02-14 19:18:39.783206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.383 [2024-02-14 19:18:39.784292] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:02.383 [2024-02-14 19:18:39.788326] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 323.031 ms, result 0 00:18:02.383 [2024-02-14 19:18:39.789337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:02.642 [2024-02-14 19:18:39.805798] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.740  Copying: 22/256 [MB] (22 MBps) Copying: 45/256 [MB] (23 MBps) Copying: 68/256 [MB] (23 MBps) Copying: 91/256 [MB] (22 MBps) Copying: 114/256 [MB] (22 MBps) Copying: 137/256 [MB] (23 MBps) Copying: 161/256 [MB] (23 MBps) Copying: 184/256 [MB] (23 MBps) Copying: 207/256 [MB] (23 MBps) Copying: 230/256 [MB] (22 MBps) Copying: 253/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-02-14 19:18:50.934849] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:18:13.740 [2024-02-14 19:18:50.935017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:13.740 [2024-02-14 19:18:50.946746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:50.946786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:13.740 [2024-02-14 19:18:50.946820] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:13.740 [2024-02-14 19:18:50.946832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:50.946862] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:13.740 [2024-02-14 19:18:50.950131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:50.950167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:13.740 [2024-02-14 19:18:50.950198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.247 ms 00:18:13.740 [2024-02-14 19:18:50.950209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:50.951970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:50.952016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:13.740 [2024-02-14 19:18:50.952048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:18:13.740 [2024-02-14 19:18:50.952060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:50.959252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:50.959315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:13.740 [2024-02-14 19:18:50.959348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.162 ms 00:18:13.740 [2024-02-14 19:18:50.959359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:50.966511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:50.966580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:13.740 [2024-02-14 19:18:50.966625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.045 ms 00:18:13.740 [2024-02-14 19:18:50.966640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:50.997774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:50.997822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:13.740 [2024-02-14 19:18:50.997858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.043 ms 00:18:13.740 [2024-02-14 19:18:50.997869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:51.015049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:51.015091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:13.740 [2024-02-14 19:18:51.015126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.089 ms 00:18:13.740 [2024-02-14 19:18:51.015137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:51.015331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:51.015351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:13.740 [2024-02-14 19:18:51.015364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:18:13.740 [2024-02-14 19:18:51.015375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:51.045525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:18:13.740 [2024-02-14 19:18:51.045580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:13.740 [2024-02-14 19:18:51.045614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.120 ms 00:18:13.740 [2024-02-14 19:18:51.045625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:51.074699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:51.074739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:13.740 [2024-02-14 19:18:51.074771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.964 ms 00:18:13.740 [2024-02-14 19:18:51.074792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:51.102409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:51.102447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:13.740 [2024-02-14 19:18:51.102479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.510 ms 00:18:13.740 [2024-02-14 19:18:51.102489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:51.130545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.740 [2024-02-14 19:18:51.130613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:13.740 [2024-02-14 19:18:51.130647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.885 ms 00:18:13.740 [2024-02-14 19:18:51.130657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.740 [2024-02-14 19:18:51.130736] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:13.740 [2024-02-14 19:18:51.130761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 
[2024-02-14 19:18:51.130929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.130994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:13.740 [2024-02-14 19:18:51.131126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:18:13.741 [2024-02-14 19:18:51.131207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.131908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.132168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.132448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.132667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.132871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.132902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.132926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.132948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.132971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.132994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:13.741 [2024-02-14 19:18:51.133339] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:13.741 [2024-02-14 19:18:51.133351] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d840bd34-7af4-4abc-a654-d3a1fdd47b5d 00:18:13.741 [2024-02-14 19:18:51.133362] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:13.741 [2024-02-14 19:18:51.133372] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:13.741 [2024-02-14 19:18:51.133388] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:13.741 [2024-02-14 19:18:51.133399] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:13.741 [2024-02-14 19:18:51.133409] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:13.741 [2024-02-14 19:18:51.133420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:13.741 [2024-02-14 19:18:51.133430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:13.741 [2024-02-14 19:18:51.133440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:13.741 [2024-02-14 19:18:51.133449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:13.741 [2024-02-14 19:18:51.133460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.741 [2024-02-14 19:18:51.133472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:13.741 [2024-02-14 19:18:51.133484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.726 ms 00:18:13.741 [2024-02-14 
19:18:51.133513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.741 [2024-02-14 19:18:51.148567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.741 [2024-02-14 19:18:51.148608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:13.741 [2024-02-14 19:18:51.148641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.021 ms 00:18:13.741 [2024-02-14 19:18:51.148651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.741 [2024-02-14 19:18:51.148920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.741 [2024-02-14 19:18:51.148938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:13.741 [2024-02-14 19:18:51.148949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:18:13.741 [2024-02-14 19:18:51.148960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.194526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.194624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:14.001 [2024-02-14 19:18:51.194661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.194673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.194847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.194864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:14.001 [2024-02-14 19:18:51.194876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.194887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.194957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.194981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:14.001 [2024-02-14 19:18:51.194993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.195004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.195029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.195043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:14.001 [2024-02-14 19:18:51.195071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.195114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.288345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.288411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:14.001 [2024-02-14 19:18:51.288446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.288457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.322608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.322648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:14.001 [2024-02-14 19:18:51.322680] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.322691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.322765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.322782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:14.001 [2024-02-14 19:18:51.322810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.322820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.322854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.322867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:14.001 [2024-02-14 19:18:51.322878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.322887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.322993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.323010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:14.001 [2024-02-14 19:18:51.323022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.323045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.323093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.323110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:14.001 [2024-02-14 19:18:51.323120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.323131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.323173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.323187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:14.001 [2024-02-14 19:18:51.323198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.323224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.323277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.001 [2024-02-14 19:18:51.323292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:14.001 [2024-02-14 19:18:51.323303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.001 [2024-02-14 19:18:51.323313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.001 [2024-02-14 19:18:51.323479] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.734 ms, result 0 00:18:15.377 00:18:15.377 00:18:15.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:15.377 19:18:52 -- ftl/trim.sh@72 -- # svcpid=73773 00:18:15.377 19:18:52 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:15.377 19:18:52 -- ftl/trim.sh@73 -- # waitforlisten 73773 00:18:15.377 19:18:52 -- common/autotest_common.sh@817 -- # '[' -z 73773 ']' 00:18:15.377 19:18:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.377 19:18:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:15.377 19:18:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.377 19:18:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:15.377 19:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:15.377 [2024-02-14 19:18:52.479830] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:18:15.377 [2024-02-14 19:18:52.479980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73773 ] 00:18:15.377 [2024-02-14 19:18:52.637491] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.636 [2024-02-14 19:18:52.803282] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:15.636 [2024-02-14 19:18:52.803536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.011 19:18:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:17.011 19:18:54 -- common/autotest_common.sh@850 -- # return 0 00:18:17.011 19:18:54 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:17.011 [2024-02-14 19:18:54.263582] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:17.011 [2024-02-14 19:18:54.263663] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:17.270 [2024-02-14 19:18:54.436476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.270 [2024-02-14 19:18:54.436567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:17.270 [2024-02-14 19:18:54.436595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:17.270 [2024-02-14 19:18:54.436609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.270 [2024-02-14 19:18:54.440711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.270 [2024-02-14 19:18:54.440756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.270 [2024-02-14 19:18:54.440797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.069 ms 00:18:17.270 [2024-02-14 19:18:54.440811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.270 [2024-02-14 19:18:54.440998] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:17.270 [2024-02-14 19:18:54.442131] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:17.270 [2024-02-14 19:18:54.442195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.270 [2024-02-14 19:18:54.442211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.270 [2024-02-14 19:18:54.442244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.200 ms 00:18:17.270 [2024-02-14 19:18:54.442257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.270 [2024-02-14 19:18:54.443703] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:17.270 [2024-02-14 19:18:54.459595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.270 [2024-02-14 19:18:54.459673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:17.270 [2024-02-14 19:18:54.459695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.906 ms 00:18:17.270 [2024-02-14 19:18:54.459713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.270 [2024-02-14 19:18:54.459827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.270 [2024-02-14 19:18:54.459855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:17.270 [2024-02-14 19:18:54.459869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:17.270 [2024-02-14 19:18:54.459885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.270 [2024-02-14 19:18:54.464388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.270 [2024-02-14 19:18:54.464454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.270 [2024-02-14 19:18:54.464488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.437 ms 00:18:17.270 [2024-02-14 19:18:54.464535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.270 [2024-02-14 19:18:54.464724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.270 [2024-02-14 19:18:54.464753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.270 [2024-02-14 19:18:54.464768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:18:17.270 [2024-02-14 19:18:54.464786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.270 [2024-02-14 19:18:54.464854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.270 [2024-02-14 19:18:54.464908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:17.270 [2024-02-14 19:18:54.464932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:17.270 [2024-02-14 19:18:54.464952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.270 [2024-02-14 19:18:54.465001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:17.270 [2024-02-14 19:18:54.469111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.271 [2024-02-14 19:18:54.469148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.271 [2024-02-14 19:18:54.469187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.116 ms 00:18:17.271 [2024-02-14 19:18:54.469200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.271 [2024-02-14 19:18:54.469281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.271 [2024-02-14 19:18:54.469300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:17.271 [2024-02-14 19:18:54.469318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:17.271 [2024-02-14 19:18:54.469329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:17.271 [2024-02-14 19:18:54.469371] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:17.271 [2024-02-14 19:18:54.469403] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:17.271 [2024-02-14 19:18:54.469447] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:17.271 [2024-02-14 19:18:54.469467] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:17.271 [2024-02-14 19:18:54.469582] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:17.271 [2024-02-14 19:18:54.469602] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:17.271 [2024-02-14 19:18:54.469649] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:17.271 [2024-02-14 19:18:54.469688] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:17.271 [2024-02-14 19:18:54.469709] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:17.271 [2024-02-14 19:18:54.469723] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:17.271 [2024-02-14 19:18:54.469740] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:17.271 [2024-02-14 19:18:54.469752] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:17.271 [2024-02-14 19:18:54.469773] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:17.271 [2024-02-14 19:18:54.469787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.271 [2024-02-14 19:18:54.469803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:17.271 [2024-02-14 19:18:54.469817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:18:17.271 [2024-02-14 19:18:54.469833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.271 [2024-02-14 19:18:54.469951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.271 [2024-02-14 19:18:54.469998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:17.271 [2024-02-14 19:18:54.470026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:17.271 [2024-02-14 19:18:54.470059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.271 [2024-02-14 19:18:54.470185] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:17.271 [2024-02-14 19:18:54.470245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:17.271 [2024-02-14 19:18:54.470267] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:17.271 [2024-02-14 19:18:54.470298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.271 [2024-02-14 19:18:54.470323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:17.271 [2024-02-14 19:18:54.470355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:17.271 [2024-02-14 19:18:54.470380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:17.271 [2024-02-14 19:18:54.470416] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:17.271 [2024-02-14 19:18:54.470442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:17.271 [2024-02-14 19:18:54.470475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:17.271 [2024-02-14 19:18:54.470548] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:17.271 [2024-02-14 19:18:54.470574] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:17.271 [2024-02-14 19:18:54.470597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:17.271 [2024-02-14 19:18:54.470630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:17.271 [2024-02-14 19:18:54.470658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:17.271 [2024-02-14 19:18:54.470691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.271 [2024-02-14 19:18:54.470716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:17.271 [2024-02-14 19:18:54.470744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:17.271 [2024-02-14 19:18:54.470769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.271 [2024-02-14 19:18:54.470802] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:17.271 [2024-02-14 19:18:54.470826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:17.271 [2024-02-14 19:18:54.470855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:17.271 [2024-02-14 19:18:54.470878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:17.271 [2024-02-14 19:18:54.470931] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:17.271 [2024-02-14 19:18:54.470968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:17.271 [2024-02-14 19:18:54.471002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:17.271 [2024-02-14 19:18:54.471027] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:17.271 [2024-02-14 19:18:54.471059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:17.271 [2024-02-14 19:18:54.471107] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:17.271 [2024-02-14 19:18:54.471142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:17.271 [2024-02-14 19:18:54.471168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:17.271 [2024-02-14 19:18:54.471199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:17.271 [2024-02-14 19:18:54.471214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:17.271 [2024-02-14 19:18:54.471233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:17.271 [2024-02-14 19:18:54.471261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:17.271 [2024-02-14 19:18:54.471277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:17.271 [2024-02-14 19:18:54.471289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:17.271 [2024-02-14 19:18:54.471304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:17.271 [2024-02-14 19:18:54.471316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:17.271 
[2024-02-14 19:18:54.471336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:17.271 [2024-02-14 19:18:54.471348] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:17.271 [2024-02-14 19:18:54.471365] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:17.271 [2024-02-14 19:18:54.471381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:17.271 [2024-02-14 19:18:54.471398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.271 [2024-02-14 19:18:54.471410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:17.271 [2024-02-14 19:18:54.471427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:17.271 [2024-02-14 19:18:54.471439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:17.271 [2024-02-14 19:18:54.471455] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:17.271 [2024-02-14 19:18:54.471467] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:17.271 [2024-02-14 19:18:54.471510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:17.271 [2024-02-14 19:18:54.471543] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:17.271 [2024-02-14 19:18:54.471561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:17.271 [2024-02-14 19:18:54.471574] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:17.271 [2024-02-14 19:18:54.471588] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:17.271 [2024-02-14 19:18:54.471599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:17.271 [2024-02-14 19:18:54.471615] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:17.271 [2024-02-14 19:18:54.471626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:17.271 [2024-02-14 19:18:54.471642] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:17.271 [2024-02-14 19:18:54.471654] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:17.271 [2024-02-14 19:18:54.471667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:17.271 [2024-02-14 19:18:54.471678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:17.271 [2024-02-14 19:18:54.471692] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:17.271 [2024-02-14 19:18:54.471704] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:17.271 [2024-02-14 19:18:54.471718] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:17.271 [2024-02-14 19:18:54.471731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:17.271 [2024-02-14 19:18:54.471744] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:17.271 [2024-02-14 19:18:54.471756] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:17.271 [2024-02-14 19:18:54.471771] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:17.272 [2024-02-14 19:18:54.471782] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:17.272 [2024-02-14 19:18:54.471796] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:17.272 [2024-02-14 19:18:54.471807] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:17.272 [2024-02-14 19:18:54.471824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.471836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:17.272 [2024-02-14 19:18:54.471850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:18:17.272 [2024-02-14 19:18:54.471877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.490192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.490401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.272 [2024-02-14 19:18:54.490650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.247 ms 00:18:17.272 [2024-02-14 19:18:54.490834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.491117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.491290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:17.272 [2024-02-14 19:18:54.491505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:17.272 [2024-02-14 19:18:54.491730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.530813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.531008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.272 [2024-02-14 19:18:54.531199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.867 ms 00:18:17.272 [2024-02-14 19:18:54.531359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.531659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.531811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.272 [2024-02-14 19:18:54.532003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:17.272 [2024-02-14 19:18:54.532172] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.532760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.532948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.272 [2024-02-14 19:18:54.533146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:18:17.272 [2024-02-14 19:18:54.533337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.533728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.533887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.272 [2024-02-14 19:18:54.534079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:18:17.272 [2024-02-14 19:18:54.534277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.552406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.552620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.272 [2024-02-14 19:18:54.552813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.922 ms 00:18:17.272 [2024-02-14 19:18:54.552986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.568717] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:17.272 [2024-02-14 19:18:54.568959] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:17.272 [2024-02-14 19:18:54.569133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.569168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:17.272 [2024-02-14 19:18:54.569204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.642 ms 00:18:17.272 [2024-02-14 19:18:54.569228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.596109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.596152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:17.272 [2024-02-14 19:18:54.596192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.751 ms 00:18:17.272 [2024-02-14 19:18:54.596204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.610263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.610300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:17.272 [2024-02-14 19:18:54.610339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.969 ms 00:18:17.272 [2024-02-14 19:18:54.610350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 [2024-02-14 19:18:54.624189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.624243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:17.272 [2024-02-14 19:18:54.624285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.730 ms 00:18:17.272 [2024-02-14 19:18:54.624297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.272 
[2024-02-14 19:18:54.624844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.272 [2024-02-14 19:18:54.624885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:17.272 [2024-02-14 19:18:54.624921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:18:17.272 [2024-02-14 19:18:54.624961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.529 [2024-02-14 19:18:54.697838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.529 [2024-02-14 19:18:54.697902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:17.529 [2024-02-14 19:18:54.697946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.822 ms 00:18:17.529 [2024-02-14 19:18:54.697959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.529 [2024-02-14 19:18:54.709176] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:17.529 [2024-02-14 19:18:54.721771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.529 [2024-02-14 19:18:54.721847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:17.529 [2024-02-14 19:18:54.721868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.654 ms 00:18:17.529 [2024-02-14 19:18:54.721883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.529 [2024-02-14 19:18:54.722015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.529 [2024-02-14 19:18:54.722041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:17.529 [2024-02-14 19:18:54.722055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:17.529 [2024-02-14 19:18:54.722085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.529 [2024-02-14 19:18:54.722140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.529 [2024-02-14 19:18:54.722158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:17.529 [2024-02-14 19:18:54.722170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:17.529 [2024-02-14 19:18:54.722183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.529 [2024-02-14 19:18:54.725097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.529 [2024-02-14 19:18:54.725159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:17.529 [2024-02-14 19:18:54.725175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.887 ms 00:18:17.529 [2024-02-14 19:18:54.725191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.529 [2024-02-14 19:18:54.725241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.529 [2024-02-14 19:18:54.725262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:17.529 [2024-02-14 19:18:54.725275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:17.529 [2024-02-14 19:18:54.725290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.529 [2024-02-14 19:18:54.725341] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:17.529 [2024-02-14 19:18:54.725367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.529 [2024-02-14 
19:18:54.725379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:17.529 [2024-02-14 19:18:54.725394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:17.529 [2024-02-14 19:18:54.725406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.529 [2024-02-14 19:18:54.754091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.529 [2024-02-14 19:18:54.754138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:17.529 [2024-02-14 19:18:54.754182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.645 ms 00:18:17.529 [2024-02-14 19:18:54.754196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.529 [2024-02-14 19:18:54.754344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.529 [2024-02-14 19:18:54.754365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:17.529 [2024-02-14 19:18:54.754386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:17.529 [2024-02-14 19:18:54.754398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.529 [2024-02-14 19:18:54.755678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:17.529 [2024-02-14 19:18:54.759742] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 318.664 ms, result 0 00:18:17.529 [2024-02-14 19:18:54.760965] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:17.529 Some configs were skipped because the RPC state that can call them passed over. 
00:18:17.529 19:18:54 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:17.787 [2024-02-14 19:18:55.117048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.787 [2024-02-14 19:18:55.117329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:17.787 [2024-02-14 19:18:55.117464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.284 ms 00:18:17.787 [2024-02-14 19:18:55.117629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.787 [2024-02-14 19:18:55.117765] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 28.998 ms, result 0 00:18:17.787 true 00:18:17.787 19:18:55 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:18.045 [2024-02-14 19:18:55.355702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.045 [2024-02-14 19:18:55.355971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:18.045 [2024-02-14 19:18:55.356135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.288 ms 00:18:18.045 [2024-02-14 19:18:55.356298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.045 [2024-02-14 19:18:55.356422] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 31.000 ms, result 0 00:18:18.045 true 00:18:18.045 19:18:55 -- ftl/trim.sh@81 -- # killprocess 73773 00:18:18.045 19:18:55 -- common/autotest_common.sh@924 -- # '[' -z 73773 ']' 00:18:18.045 19:18:55 -- common/autotest_common.sh@928 -- # kill -0 73773 00:18:18.045 19:18:55 -- common/autotest_common.sh@929 -- # uname 00:18:18.045 19:18:55 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:18.045 19:18:55 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 73773 00:18:18.045 killing process with pid 73773 00:18:18.045 19:18:55 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:18.045 19:18:55 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:18.045 19:18:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 73773' 00:18:18.045 19:18:55 -- common/autotest_common.sh@943 -- # kill 73773 00:18:18.045 19:18:55 -- common/autotest_common.sh@948 -- # wait 73773 00:18:18.977 [2024-02-14 19:18:56.242079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.977 [2024-02-14 19:18:56.242165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:18.977 [2024-02-14 19:18:56.242185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:18.977 [2024-02-14 19:18:56.242200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.977 [2024-02-14 19:18:56.242232] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:18.977 [2024-02-14 19:18:56.245352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.977 [2024-02-14 19:18:56.245383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:18.977 [2024-02-14 19:18:56.245418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.096 ms 00:18:18.977 [2024-02-14 19:18:56.245430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 
19:18:56.245790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.978 [2024-02-14 19:18:56.245811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:18.978 [2024-02-14 19:18:56.245827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:18:18.978 [2024-02-14 19:18:56.245839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 19:18:56.249771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.978 [2024-02-14 19:18:56.249815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:18.978 [2024-02-14 19:18:56.249836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.902 ms 00:18:18.978 [2024-02-14 19:18:56.249849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 19:18:56.256841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.978 [2024-02-14 19:18:56.256876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:18.978 [2024-02-14 19:18:56.256895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.939 ms 00:18:18.978 [2024-02-14 19:18:56.256907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 19:18:56.268695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.978 [2024-02-14 19:18:56.268730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:18.978 [2024-02-14 19:18:56.268767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.717 ms 00:18:18.978 [2024-02-14 19:18:56.268778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 19:18:56.276935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.978 [2024-02-14 19:18:56.276976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:18.978 [2024-02-14 19:18:56.277010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.110 ms 00:18:18.978 [2024-02-14 19:18:56.277022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 19:18:56.277163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.978 [2024-02-14 19:18:56.277182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:18.978 [2024-02-14 19:18:56.277197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:18.978 [2024-02-14 19:18:56.277208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 19:18:56.289096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.978 [2024-02-14 19:18:56.289130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:18.978 [2024-02-14 19:18:56.289170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.861 ms 00:18:18.978 [2024-02-14 19:18:56.289182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 19:18:56.300762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.978 [2024-02-14 19:18:56.300796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:18.978 [2024-02-14 19:18:56.300837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.529 ms 00:18:18.978 [2024-02-14 19:18:56.300849] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 19:18:56.311988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.978 [2024-02-14 19:18:56.312022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:18.978 [2024-02-14 19:18:56.312060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.085 ms 00:18:18.978 [2024-02-14 19:18:56.312072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 19:18:56.323688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.978 [2024-02-14 19:18:56.323726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:18.978 [2024-02-14 19:18:56.323765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.533 ms 00:18:18.978 [2024-02-14 19:18:56.323777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.978 [2024-02-14 19:18:56.323828] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:18.978 [2024-02-14 19:18:56.323867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.323911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.323940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.323975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.323989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324184] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 
19:18:56.324546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:18.978 [2024-02-14 19:18:56.324709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:18:18.979 [2024-02-14 19:18:56.324916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.324999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:18.979 [2024-02-14 19:18:56.325462] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:18.979 [2024-02-14 19:18:56.325511] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d840bd34-7af4-4abc-a654-d3a1fdd47b5d 00:18:18.979 [2024-02-14 19:18:56.325528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:18.979 [2024-02-14 19:18:56.325546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:18.979 [2024-02-14 19:18:56.325558] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:18.979 [2024-02-14 19:18:56.325575] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:18.979 [2024-02-14 19:18:56.325588] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:18.979 [2024-02-14 19:18:56.325606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:18.979 [2024-02-14 19:18:56.325618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:18.979 [2024-02-14 19:18:56.325634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:18.979 [2024-02-14 19:18:56.325656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:18.979 [2024-02-14 19:18:56.325675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.979 [2024-02-14 19:18:56.325688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:18.979 [2024-02-14 19:18:56.325707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms 00:18:18.979 [2024-02-14 19:18:56.325724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.979 [2024-02-14 19:18:56.342630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.979 [2024-02-14 19:18:56.342670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:18.979 [2024-02-14 19:18:56.342701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.845 ms 00:18:18.979 [2024-02-14 19:18:56.342715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.979 [2024-02-14 19:18:56.343030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.979 [2024-02-14 19:18:56.343052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:18:18.979 [2024-02-14 19:18:56.343078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:18:18.979 [2024-02-14 19:18:56.343089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.395206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.395252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:19.238 [2024-02-14 19:18:56.395286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.395298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.395396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.395414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:19.238 [2024-02-14 19:18:56.395432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.395443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.395584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.395604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:19.238 [2024-02-14 19:18:56.395623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.395635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.395696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.395710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:19.238 [2024-02-14 19:18:56.395724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.395738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.486351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.486410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:19.238 [2024-02-14 19:18:56.486448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.486460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.520483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.520547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:19.238 [2024-02-14 19:18:56.520587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.520598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.520675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.520694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:19.238 [2024-02-14 19:18:56.520711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.520723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.520761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 
19:18:56.520774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:19.238 [2024-02-14 19:18:56.520788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.520799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.520956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.520976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:19.238 [2024-02-14 19:18:56.520991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.521003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.521064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.521082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:19.238 [2024-02-14 19:18:56.521098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.521109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.521161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.521177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.238 [2024-02-14 19:18:56.521194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.521206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.521266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.238 [2024-02-14 19:18:56.521283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.238 [2024-02-14 19:18:56.521298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.238 [2024-02-14 19:18:56.521310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.238 [2024-02-14 19:18:56.521475] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 279.387 ms, result 0 00:18:20.172 19:18:57 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:20.172 19:18:57 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:20.430 [2024-02-14 19:18:57.602384] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:18:20.430 [2024-02-14 19:18:57.602569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73839 ] 00:18:20.430 [2024-02-14 19:18:57.771405] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.688 [2024-02-14 19:18:57.937464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.688 [2024-02-14 19:18:57.937589] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:18:20.946 [2024-02-14 19:18:58.219328] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:20.946 [2024-02-14 19:18:58.219421] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:21.206 [2024-02-14 19:18:58.371987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.206 [2024-02-14 19:18:58.372037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:21.206 [2024-02-14 19:18:58.372088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:21.206 [2024-02-14 19:18:58.372099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.206 [2024-02-14 19:18:58.375436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.206 [2024-02-14 19:18:58.375481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:21.206 [2024-02-14 19:18:58.375551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.293 ms 00:18:21.206 [2024-02-14 19:18:58.375563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.206 [2024-02-14 19:18:58.375730] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:21.206 [2024-02-14 19:18:58.376756] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:21.206 [2024-02-14 19:18:58.376816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.206 [2024-02-14 19:18:58.376831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:21.206 [2024-02-14 19:18:58.376844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:18:21.206 [2024-02-14 19:18:58.376855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.206 [2024-02-14 19:18:58.378173] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:21.206 [2024-02-14 19:18:58.394849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.206 [2024-02-14 19:18:58.394888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:21.206 [2024-02-14 19:18:58.394921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.677 ms 00:18:21.206 [2024-02-14 19:18:58.394931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.206 [2024-02-14 19:18:58.395040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.207 [2024-02-14 19:18:58.395061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:21.207 [2024-02-14 19:18:58.395073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 
ms 00:18:21.207 [2024-02-14 19:18:58.395082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.207 [2024-02-14 19:18:58.399665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.207 [2024-02-14 19:18:58.399701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:21.207 [2024-02-14 19:18:58.399733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 00:18:21.207 [2024-02-14 19:18:58.399749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.207 [2024-02-14 19:18:58.399870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.207 [2024-02-14 19:18:58.399889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:21.207 [2024-02-14 19:18:58.399901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:21.207 [2024-02-14 19:18:58.399911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.207 [2024-02-14 19:18:58.399947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.207 [2024-02-14 19:18:58.399961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:21.207 [2024-02-14 19:18:58.399973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:21.207 [2024-02-14 19:18:58.399982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.207 [2024-02-14 19:18:58.400016] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:21.207 [2024-02-14 19:18:58.403973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.207 [2024-02-14 19:18:58.404008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:21.207 [2024-02-14 19:18:58.404045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.965 ms 00:18:21.207 [2024-02-14 19:18:58.404055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.207 [2024-02-14 19:18:58.404116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.207 [2024-02-14 19:18:58.404133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:21.207 [2024-02-14 19:18:58.404145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:21.207 [2024-02-14 19:18:58.404155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.207 [2024-02-14 19:18:58.404179] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:21.207 [2024-02-14 19:18:58.404205] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:21.207 [2024-02-14 19:18:58.404241] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:21.207 [2024-02-14 19:18:58.404262] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:21.207 [2024-02-14 19:18:58.404332] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:21.207 [2024-02-14 19:18:58.404346] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:21.207 [2024-02-14 19:18:58.404359] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:18:21.207 [2024-02-14 19:18:58.404371] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:21.207 [2024-02-14 19:18:58.404383] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:21.207 [2024-02-14 19:18:58.404394] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:21.207 [2024-02-14 19:18:58.404403] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:21.207 [2024-02-14 19:18:58.404413] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:21.207 [2024-02-14 19:18:58.404426] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:21.207 [2024-02-14 19:18:58.404437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.207 [2024-02-14 19:18:58.404447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:21.207 [2024-02-14 19:18:58.404458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:18:21.207 [2024-02-14 19:18:58.404467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.207 [2024-02-14 19:18:58.404549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.207 [2024-02-14 19:18:58.404565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:21.207 [2024-02-14 19:18:58.404576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:21.207 [2024-02-14 19:18:58.404586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.207 [2024-02-14 19:18:58.404668] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:21.207 [2024-02-14 19:18:58.404685] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:21.207 [2024-02-14 19:18:58.404696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:21.207 [2024-02-14 19:18:58.404707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.207 [2024-02-14 19:18:58.404717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:21.207 [2024-02-14 19:18:58.404727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:21.207 [2024-02-14 19:18:58.404736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:21.207 [2024-02-14 19:18:58.404746] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:21.207 [2024-02-14 19:18:58.404756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:21.207 [2024-02-14 19:18:58.404765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:21.207 [2024-02-14 19:18:58.404774] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:21.207 [2024-02-14 19:18:58.404783] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:21.207 [2024-02-14 19:18:58.404792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:21.207 [2024-02-14 19:18:58.404802] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:21.207 [2024-02-14 19:18:58.404811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:21.207 [2024-02-14 19:18:58.404821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.207 [2024-02-14 19:18:58.404831] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:18:21.207 [2024-02-14 19:18:58.404840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:21.207 [2024-02-14 19:18:58.404861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.207 [2024-02-14 19:18:58.404871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:21.207 [2024-02-14 19:18:58.404880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:21.207 [2024-02-14 19:18:58.404890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:21.207 [2024-02-14 19:18:58.404899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:21.207 [2024-02-14 19:18:58.404908] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:21.207 [2024-02-14 19:18:58.404917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:21.207 [2024-02-14 19:18:58.404926] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:21.207 [2024-02-14 19:18:58.404936] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:21.207 [2024-02-14 19:18:58.404945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:21.207 [2024-02-14 19:18:58.404953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:21.207 [2024-02-14 19:18:58.404962] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:21.207 [2024-02-14 19:18:58.404971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:21.207 [2024-02-14 19:18:58.404981] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:21.207 [2024-02-14 19:18:58.404990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:21.207 [2024-02-14 19:18:58.404999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:21.207 [2024-02-14 19:18:58.405007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:21.207 [2024-02-14 19:18:58.405017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:21.207 [2024-02-14 19:18:58.405025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:21.207 [2024-02-14 19:18:58.405035] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:21.207 [2024-02-14 19:18:58.405044] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:21.207 [2024-02-14 19:18:58.405053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:21.207 [2024-02-14 19:18:58.405061] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:21.207 [2024-02-14 19:18:58.405076] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:21.207 [2024-02-14 19:18:58.405086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:21.207 [2024-02-14 19:18:58.405097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.207 [2024-02-14 19:18:58.405107] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:21.207 [2024-02-14 19:18:58.405116] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:21.207 [2024-02-14 19:18:58.405125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:21.207 [2024-02-14 19:18:58.405135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:21.207 [2024-02-14 19:18:58.405144] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:21.207 [2024-02-14 19:18:58.405154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:21.207 [2024-02-14 19:18:58.405164] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:21.207 [2024-02-14 19:18:58.405176] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:21.207 [2024-02-14 19:18:58.405191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:21.207 [2024-02-14 19:18:58.405202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:21.207 [2024-02-14 19:18:58.405212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:21.207 [2024-02-14 19:18:58.405222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:21.207 [2024-02-14 19:18:58.405231] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:21.207 [2024-02-14 19:18:58.405241] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:21.208 [2024-02-14 19:18:58.405251] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:21.208 [2024-02-14 19:18:58.405261] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:21.208 [2024-02-14 19:18:58.405271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:21.208 [2024-02-14 19:18:58.405281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:21.208 [2024-02-14 19:18:58.405291] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:21.208 [2024-02-14 19:18:58.405301] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:21.208 [2024-02-14 19:18:58.405311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:21.208 [2024-02-14 19:18:58.405321] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:21.208 [2024-02-14 19:18:58.405331] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:21.208 [2024-02-14 19:18:58.405342] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:21.208 [2024-02-14 19:18:58.405352] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:21.208 [2024-02-14 19:18:58.405362] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:21.208 [2024-02-14 19:18:58.405373] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:21.208 [2024-02-14 19:18:58.405384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.405394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:21.208 [2024-02-14 19:18:58.405404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:18:21.208 [2024-02-14 19:18:58.405414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.421766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.421808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:21.208 [2024-02-14 19:18:58.421826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.257 ms 00:18:21.208 [2024-02-14 19:18:58.421837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.421974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.421992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:21.208 [2024-02-14 19:18:58.422004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:21.208 [2024-02-14 19:18:58.422014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.470810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.470881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:21.208 [2024-02-14 19:18:58.470918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.744 ms 00:18:21.208 [2024-02-14 19:18:58.470946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.471066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.471086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:21.208 [2024-02-14 19:18:58.471106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:21.208 [2024-02-14 19:18:58.471117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.471499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.471516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:21.208 [2024-02-14 19:18:58.471528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:18:21.208 [2024-02-14 19:18:58.471539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.471731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.471763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:21.208 [2024-02-14 19:18:58.471775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:18:21.208 [2024-02-14 19:18:58.471790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.488617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 
19:18:58.488657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:21.208 [2024-02-14 19:18:58.488694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.796 ms 00:18:21.208 [2024-02-14 19:18:58.488705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.503524] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:21.208 [2024-02-14 19:18:58.503564] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:21.208 [2024-02-14 19:18:58.503596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.503608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:21.208 [2024-02-14 19:18:58.503620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.767 ms 00:18:21.208 [2024-02-14 19:18:58.503629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.530352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.530413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:21.208 [2024-02-14 19:18:58.530447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.637 ms 00:18:21.208 [2024-02-14 19:18:58.530458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.544836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.544890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:21.208 [2024-02-14 19:18:58.544922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.256 ms 00:18:21.208 [2024-02-14 19:18:58.544932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.559124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.559162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:21.208 [2024-02-14 19:18:58.559194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.108 ms 00:18:21.208 [2024-02-14 19:18:58.559204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.208 [2024-02-14 19:18:58.559685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.208 [2024-02-14 19:18:58.559708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:21.208 [2024-02-14 19:18:58.559721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:18:21.208 [2024-02-14 19:18:58.559736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.468 [2024-02-14 19:18:58.628969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.468 [2024-02-14 19:18:58.629049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:21.468 [2024-02-14 19:18:58.629106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.199 ms 00:18:21.468 [2024-02-14 19:18:58.629134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.468 [2024-02-14 19:18:58.640649] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:21.468 [2024-02-14 19:18:58.653239] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.468 [2024-02-14 19:18:58.653295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:21.468 [2024-02-14 19:18:58.653329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.977 ms 00:18:21.468 [2024-02-14 19:18:58.653340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.468 [2024-02-14 19:18:58.653461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.468 [2024-02-14 19:18:58.653479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:21.468 [2024-02-14 19:18:58.653491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:21.468 [2024-02-14 19:18:58.653545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.468 [2024-02-14 19:18:58.653625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.468 [2024-02-14 19:18:58.653651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:21.468 [2024-02-14 19:18:58.653697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:21.468 [2024-02-14 19:18:58.653709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.468 [2024-02-14 19:18:58.655588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.468 [2024-02-14 19:18:58.655626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:21.468 [2024-02-14 19:18:58.655657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.847 ms 00:18:21.468 [2024-02-14 19:18:58.655667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.468 [2024-02-14 19:18:58.655724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.468 [2024-02-14 19:18:58.655738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:21.468 [2024-02-14 19:18:58.655750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:21.468 [2024-02-14 19:18:58.655760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.468 [2024-02-14 19:18:58.655816] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:21.468 [2024-02-14 19:18:58.655832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.468 [2024-02-14 19:18:58.655842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:21.468 [2024-02-14 19:18:58.655853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:21.468 [2024-02-14 19:18:58.655863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.468 [2024-02-14 19:18:58.683970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.468 [2024-02-14 19:18:58.684010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:21.468 [2024-02-14 19:18:58.684043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.083 ms 00:18:21.468 [2024-02-14 19:18:58.684054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.468 [2024-02-14 19:18:58.684167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.468 [2024-02-14 19:18:58.684186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:21.468 [2024-02-14 19:18:58.684198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.034 ms 00:18:21.468 [2024-02-14 19:18:58.684208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.468 [2024-02-14 19:18:58.685331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:21.468 [2024-02-14 19:18:58.689145] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.007 ms, result 0 00:18:21.468 [2024-02-14 19:18:58.690017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:21.468 [2024-02-14 19:18:58.705287] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:33.225  Copying: 25/256 [MB] (25 MBps) Copying: 48/256 [MB] (22 MBps) Copying: 70/256 [MB] (22 MBps) Copying: 92/256 [MB] (22 MBps) Copying: 113/256 [MB] (20 MBps) Copying: 132/256 [MB] (19 MBps) Copying: 153/256 [MB] (20 MBps) Copying: 174/256 [MB] (20 MBps) Copying: 195/256 [MB] (20 MBps) Copying: 217/256 [MB] (21 MBps) Copying: 239/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-02-14 19:19:10.483784] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:18:33.225 [2024-02-14 19:19:10.483903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:33.225 [2024-02-14 19:19:10.495483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.225 [2024-02-14 19:19:10.495719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:33.225 [2024-02-14 19:19:10.495845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:33.225 [2024-02-14 19:19:10.496004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.225 [2024-02-14 19:19:10.496085] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:33.225 [2024-02-14 19:19:10.499388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.225 [2024-02-14 19:19:10.499590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:33.225 [2024-02-14 19:19:10.499713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:18:33.225 [2024-02-14 19:19:10.499734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.225 [2024-02-14 19:19:10.500090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.225 [2024-02-14 19:19:10.500120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:33.225 [2024-02-14 19:19:10.500136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:18:33.225 [2024-02-14 19:19:10.500146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.225 [2024-02-14 19:19:10.503771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.225 [2024-02-14 19:19:10.503799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:33.225 [2024-02-14 19:19:10.503829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.601 ms 00:18:33.225 [2024-02-14 19:19:10.503839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.225 [2024-02-14 19:19:10.510980] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.225 [2024-02-14 19:19:10.511017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:33.225 [2024-02-14 19:19:10.511036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.108 ms 00:18:33.225 [2024-02-14 19:19:10.511046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.225 [2024-02-14 19:19:10.538928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.225 [2024-02-14 19:19:10.538967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:33.225 [2024-02-14 19:19:10.538999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.813 ms 00:18:33.225 [2024-02-14 19:19:10.539010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.225 [2024-02-14 19:19:10.555902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.225 [2024-02-14 19:19:10.555941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:33.225 [2024-02-14 19:19:10.555974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.833 ms 00:18:33.225 [2024-02-14 19:19:10.555984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.225 [2024-02-14 19:19:10.556145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.225 [2024-02-14 19:19:10.556179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:33.225 [2024-02-14 19:19:10.556191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:33.225 [2024-02-14 19:19:10.556207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.225 [2024-02-14 19:19:10.584213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.225 [2024-02-14 19:19:10.584251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:33.225 [2024-02-14 19:19:10.584282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.984 ms 00:18:33.225 [2024-02-14 19:19:10.584292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.225 [2024-02-14 19:19:10.614486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.225 [2024-02-14 19:19:10.614564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:33.225 [2024-02-14 19:19:10.614582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.140 ms 00:18:33.225 [2024-02-14 19:19:10.614593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.485 [2024-02-14 19:19:10.644031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.485 [2024-02-14 19:19:10.644068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:33.485 [2024-02-14 19:19:10.644101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.377 ms 00:18:33.485 [2024-02-14 19:19:10.644111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.485 [2024-02-14 19:19:10.671701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.485 [2024-02-14 19:19:10.671738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:33.485 [2024-02-14 19:19:10.671770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.507 ms 00:18:33.485 [2024-02-14 19:19:10.671779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:33.485 [2024-02-14 19:19:10.671835] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:33.485 [2024-02-14 19:19:10.671872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.671885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.671911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.671921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.671932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.671942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.671952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.671963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.671973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.671984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.671994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:33.485 [2024-02-14 19:19:10.672127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:18:33.486 [2024-02-14 19:19:10.672137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.672984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.673011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.673038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.673050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.673062] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.673087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:33.486 [2024-02-14 19:19:10.673108] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:33.486 [2024-02-14 19:19:10.673119] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d840bd34-7af4-4abc-a654-d3a1fdd47b5d 00:18:33.486 [2024-02-14 19:19:10.673131] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:33.486 [2024-02-14 19:19:10.673147] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:33.486 [2024-02-14 19:19:10.673157] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:33.486 [2024-02-14 19:19:10.673168] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:33.486 [2024-02-14 19:19:10.673179] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:33.486 [2024-02-14 19:19:10.673190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:33.486 [2024-02-14 19:19:10.673201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:33.486 [2024-02-14 19:19:10.673212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:33.486 [2024-02-14 19:19:10.673221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:33.486 [2024-02-14 19:19:10.673232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.486 [2024-02-14 19:19:10.673258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:33.486 [2024-02-14 19:19:10.673270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:18:33.486 [2024-02-14 19:19:10.673281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.486 [2024-02-14 19:19:10.687891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.486 [2024-02-14 19:19:10.687942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:33.486 [2024-02-14 19:19:10.687975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.585 ms 00:18:33.486 [2024-02-14 19:19:10.687985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.486 [2024-02-14 19:19:10.688243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.486 [2024-02-14 19:19:10.688261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:33.486 [2024-02-14 19:19:10.688302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:18:33.486 [2024-02-14 19:19:10.688311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.486 [2024-02-14 19:19:10.731418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.486 [2024-02-14 19:19:10.731459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.486 [2024-02-14 19:19:10.731491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.486 [2024-02-14 19:19:10.731514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.486 [2024-02-14 19:19:10.731641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.486 [2024-02-14 19:19:10.731657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.486 
[2024-02-14 19:19:10.731669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.486 [2024-02-14 19:19:10.731680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.486 [2024-02-14 19:19:10.731739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.486 [2024-02-14 19:19:10.731756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.486 [2024-02-14 19:19:10.731767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.486 [2024-02-14 19:19:10.731777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.486 [2024-02-14 19:19:10.731801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.486 [2024-02-14 19:19:10.731814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.486 [2024-02-14 19:19:10.731841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.486 [2024-02-14 19:19:10.731867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.486 [2024-02-14 19:19:10.816899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.487 [2024-02-14 19:19:10.816955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.487 [2024-02-14 19:19:10.816989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.487 [2024-02-14 19:19:10.817000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.487 [2024-02-14 19:19:10.851308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.487 [2024-02-14 19:19:10.851345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.487 [2024-02-14 19:19:10.851376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.487 [2024-02-14 19:19:10.851386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.487 [2024-02-14 19:19:10.851440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.487 [2024-02-14 19:19:10.851463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.487 [2024-02-14 19:19:10.851474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.487 [2024-02-14 19:19:10.851484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.487 [2024-02-14 19:19:10.851573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.487 [2024-02-14 19:19:10.851590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.487 [2024-02-14 19:19:10.851603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.487 [2024-02-14 19:19:10.851613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.487 [2024-02-14 19:19:10.851722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.487 [2024-02-14 19:19:10.851739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.487 [2024-02-14 19:19:10.851756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.487 [2024-02-14 19:19:10.851767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.487 [2024-02-14 19:19:10.851820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.487 [2024-02-14 19:19:10.851852] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:33.487 [2024-02-14 19:19:10.851897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.487 [2024-02-14 19:19:10.851908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.487 [2024-02-14 19:19:10.851971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.487 [2024-02-14 19:19:10.851987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.487 [2024-02-14 19:19:10.852025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.487 [2024-02-14 19:19:10.852036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.487 [2024-02-14 19:19:10.852092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.487 [2024-02-14 19:19:10.852110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.487 [2024-02-14 19:19:10.852122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.487 [2024-02-14 19:19:10.852134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.487 [2024-02-14 19:19:10.852313] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.830 ms, result 0 00:18:34.863 00:18:34.863 00:18:34.863 19:19:11 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:34.863 19:19:11 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:35.122 19:19:12 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:35.122 [2024-02-14 19:19:12.527542] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:18:35.122 [2024-02-14 19:19:12.527701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73993 ] 00:18:35.381 [2024-02-14 19:19:12.696981] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.640 [2024-02-14 19:19:12.865425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.640 [2024-02-14 19:19:12.865564] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:18:35.899 [2024-02-14 19:19:13.139280] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:35.899 [2024-02-14 19:19:13.139370] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:35.899 [2024-02-14 19:19:13.292053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.899 [2024-02-14 19:19:13.292101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:35.899 [2024-02-14 19:19:13.292137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:35.899 [2024-02-14 19:19:13.292147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.899 [2024-02-14 19:19:13.295348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.899 [2024-02-14 19:19:13.295391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.899 [2024-02-14 19:19:13.295422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.176 ms 00:18:35.899 [2024-02-14 19:19:13.295432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.899 [2024-02-14 19:19:13.295593] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:35.899 [2024-02-14 19:19:13.296613] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:35.899 [2024-02-14 19:19:13.296667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.899 [2024-02-14 19:19:13.296681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.899 [2024-02-14 19:19:13.296692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:18:35.899 [2024-02-14 19:19:13.296701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.899 [2024-02-14 19:19:13.298071] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:35.899 [2024-02-14 19:19:13.312168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.899 [2024-02-14 19:19:13.312239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:35.899 [2024-02-14 19:19:13.312255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.099 ms 00:18:35.899 [2024-02-14 19:19:13.312265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.899 [2024-02-14 19:19:13.312391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.899 [2024-02-14 19:19:13.312411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:35.899 [2024-02-14 19:19:13.312423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 
ms 00:18:35.899 [2024-02-14 19:19:13.312432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.160 [2024-02-14 19:19:13.317633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.160 [2024-02-14 19:19:13.317735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:36.160 [2024-02-14 19:19:13.317769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.148 ms 00:18:36.160 [2024-02-14 19:19:13.317786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.160 [2024-02-14 19:19:13.317922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.160 [2024-02-14 19:19:13.317943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:36.160 [2024-02-14 19:19:13.317956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:36.160 [2024-02-14 19:19:13.317966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.160 [2024-02-14 19:19:13.318022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.160 [2024-02-14 19:19:13.318037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:36.160 [2024-02-14 19:19:13.318049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:36.160 [2024-02-14 19:19:13.318060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.160 [2024-02-14 19:19:13.318111] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:36.160 [2024-02-14 19:19:13.322241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.160 [2024-02-14 19:19:13.322277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:36.160 [2024-02-14 19:19:13.322313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.153 ms 00:18:36.160 [2024-02-14 19:19:13.322322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.160 [2024-02-14 19:19:13.322398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.160 [2024-02-14 19:19:13.322415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:36.160 [2024-02-14 19:19:13.322426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:36.160 [2024-02-14 19:19:13.322435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.160 [2024-02-14 19:19:13.322462] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:36.160 [2024-02-14 19:19:13.322487] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:36.160 [2024-02-14 19:19:13.322559] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:36.160 [2024-02-14 19:19:13.322585] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:36.160 [2024-02-14 19:19:13.322656] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:36.160 [2024-02-14 19:19:13.322670] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:36.160 [2024-02-14 19:19:13.322682] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:18:36.160 [2024-02-14 19:19:13.322695] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:36.160 [2024-02-14 19:19:13.322706] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:36.160 [2024-02-14 19:19:13.322717] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:36.160 [2024-02-14 19:19:13.322726] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:36.160 [2024-02-14 19:19:13.322735] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:36.160 [2024-02-14 19:19:13.322748] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:36.160 [2024-02-14 19:19:13.322758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.160 [2024-02-14 19:19:13.322768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:36.160 [2024-02-14 19:19:13.322778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:36.160 [2024-02-14 19:19:13.322787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.160 [2024-02-14 19:19:13.322869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.160 [2024-02-14 19:19:13.322898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:36.160 [2024-02-14 19:19:13.322908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:36.160 [2024-02-14 19:19:13.322917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.161 [2024-02-14 19:19:13.322999] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:36.161 [2024-02-14 19:19:13.323014] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:36.161 [2024-02-14 19:19:13.323025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:36.161 [2024-02-14 19:19:13.323035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323045] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:36.161 [2024-02-14 19:19:13.323054] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:36.161 [2024-02-14 19:19:13.323073] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:36.161 [2024-02-14 19:19:13.323082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:36.161 [2024-02-14 19:19:13.323099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:36.161 [2024-02-14 19:19:13.323108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:36.161 [2024-02-14 19:19:13.323116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:36.161 [2024-02-14 19:19:13.323126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:36.161 [2024-02-14 19:19:13.323134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:36.161 [2024-02-14 19:19:13.323143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323152] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:18:36.161 [2024-02-14 19:19:13.323160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:36.161 [2024-02-14 19:19:13.323182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:36.161 [2024-02-14 19:19:13.323200] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:36.161 [2024-02-14 19:19:13.323209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:36.161 [2024-02-14 19:19:13.323217] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:36.161 [2024-02-14 19:19:13.323226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:36.161 [2024-02-14 19:19:13.323258] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:36.161 [2024-02-14 19:19:13.323266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:36.161 [2024-02-14 19:19:13.323283] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:36.161 [2024-02-14 19:19:13.323291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:36.161 [2024-02-14 19:19:13.323307] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:36.161 [2024-02-14 19:19:13.323315] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:36.161 [2024-02-14 19:19:13.323331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:36.161 [2024-02-14 19:19:13.323339] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:36.161 [2024-02-14 19:19:13.323356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:36.161 [2024-02-14 19:19:13.323364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:36.161 [2024-02-14 19:19:13.323372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:36.161 [2024-02-14 19:19:13.323380] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:36.161 [2024-02-14 19:19:13.323395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:36.161 [2024-02-14 19:19:13.323404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:36.161 [2024-02-14 19:19:13.323414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.161 [2024-02-14 19:19:13.323424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:36.161 [2024-02-14 19:19:13.323434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:36.161 [2024-02-14 19:19:13.323442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:36.161 [2024-02-14 19:19:13.323451] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:36.161 [2024-02-14 19:19:13.323459] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:36.161 [2024-02-14 19:19:13.323467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:36.161 [2024-02-14 19:19:13.323477] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:36.161 [2024-02-14 19:19:13.323489] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:36.161 [2024-02-14 19:19:13.323503] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:36.161 [2024-02-14 19:19:13.323512] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:36.161 [2024-02-14 19:19:13.323522] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:36.161 [2024-02-14 19:19:13.323531] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:36.161 [2024-02-14 19:19:13.323540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:36.161 [2024-02-14 19:19:13.323984] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:36.161 [2024-02-14 19:19:13.324151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:36.161 [2024-02-14 19:19:13.324276] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:36.161 [2024-02-14 19:19:13.324396] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:36.161 [2024-02-14 19:19:13.324469] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:36.161 [2024-02-14 19:19:13.324541] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:36.161 [2024-02-14 19:19:13.324595] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:36.161 [2024-02-14 19:19:13.324728] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:36.161 [2024-02-14 19:19:13.324807] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:36.161 [2024-02-14 19:19:13.324923] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:36.161 [2024-02-14 19:19:13.324999] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:36.161 [2024-02-14 19:19:13.325050] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:36.161 [2024-02-14 19:19:13.325100] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:36.161 [2024-02-14 19:19:13.325244] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:36.161 [2024-02-14 19:19:13.325268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.161 [2024-02-14 19:19:13.325281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:36.161 [2024-02-14 19:19:13.325293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.309 ms 00:18:36.161 [2024-02-14 19:19:13.325303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.161 [2024-02-14 19:19:13.342917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.161 [2024-02-14 19:19:13.343098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:36.161 [2024-02-14 19:19:13.343261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.542 ms 00:18:36.161 [2024-02-14 19:19:13.343379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.161 [2024-02-14 19:19:13.343596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.161 [2024-02-14 19:19:13.343674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:36.161 [2024-02-14 19:19:13.343784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:36.161 [2024-02-14 19:19:13.343925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.161 [2024-02-14 19:19:13.387916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.161 [2024-02-14 19:19:13.388134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:36.161 [2024-02-14 19:19:13.388292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.902 ms 00:18:36.161 [2024-02-14 19:19:13.388342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.161 [2024-02-14 19:19:13.388467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.161 [2024-02-14 19:19:13.388582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:36.161 [2024-02-14 19:19:13.388634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:36.161 [2024-02-14 19:19:13.388736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.161 [2024-02-14 19:19:13.389148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.161 [2024-02-14 19:19:13.389315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:36.161 [2024-02-14 19:19:13.389439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:18:36.161 [2024-02-14 19:19:13.389603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.161 [2024-02-14 19:19:13.389836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.161 [2024-02-14 19:19:13.389896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:36.161 [2024-02-14 19:19:13.390037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:18:36.161 [2024-02-14 19:19:13.390092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.162 [2024-02-14 19:19:13.406367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.162 [2024-02-14 
19:19:13.406582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:36.162 [2024-02-14 19:19:13.406701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.213 ms 00:18:36.162 [2024-02-14 19:19:13.406754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.162 [2024-02-14 19:19:13.422071] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:36.162 [2024-02-14 19:19:13.422289] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:36.162 [2024-02-14 19:19:13.422546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.162 [2024-02-14 19:19:13.422655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:36.162 [2024-02-14 19:19:13.422798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.530 ms 00:18:36.162 [2024-02-14 19:19:13.422849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.162 [2024-02-14 19:19:13.452043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.162 [2024-02-14 19:19:13.452257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:36.162 [2024-02-14 19:19:13.452401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.950 ms 00:18:36.162 [2024-02-14 19:19:13.452449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.162 [2024-02-14 19:19:13.467749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.162 [2024-02-14 19:19:13.467923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:36.162 [2024-02-14 19:19:13.468065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.083 ms 00:18:36.162 [2024-02-14 19:19:13.468087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.162 [2024-02-14 19:19:13.482715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.162 [2024-02-14 19:19:13.482754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:36.162 [2024-02-14 19:19:13.482786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.533 ms 00:18:36.162 [2024-02-14 19:19:13.482796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.162 [2024-02-14 19:19:13.483291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.162 [2024-02-14 19:19:13.483329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:36.162 [2024-02-14 19:19:13.483346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:18:36.162 [2024-02-14 19:19:13.483361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.162 [2024-02-14 19:19:13.553047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.162 [2024-02-14 19:19:13.553105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:36.162 [2024-02-14 19:19:13.553146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.646 ms 00:18:36.162 [2024-02-14 19:19:13.553157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.162 [2024-02-14 19:19:13.565092] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:36.421 [2024-02-14 19:19:13.579370] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.421 [2024-02-14 19:19:13.579428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:36.421 [2024-02-14 19:19:13.579463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.072 ms 00:18:36.421 [2024-02-14 19:19:13.579481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.421 [2024-02-14 19:19:13.579650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.421 [2024-02-14 19:19:13.579670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:36.421 [2024-02-14 19:19:13.579687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:36.421 [2024-02-14 19:19:13.579697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.421 [2024-02-14 19:19:13.579757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.421 [2024-02-14 19:19:13.579779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:36.421 [2024-02-14 19:19:13.579790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:36.421 [2024-02-14 19:19:13.579801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.421 [2024-02-14 19:19:13.581804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.421 [2024-02-14 19:19:13.581843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:36.421 [2024-02-14 19:19:13.581859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.976 ms 00:18:36.421 [2024-02-14 19:19:13.581870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.421 [2024-02-14 19:19:13.581914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.421 [2024-02-14 19:19:13.581929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:36.421 [2024-02-14 19:19:13.581941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:36.421 [2024-02-14 19:19:13.581952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.421 [2024-02-14 19:19:13.582041] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:36.421 [2024-02-14 19:19:13.582072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.421 [2024-02-14 19:19:13.582082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:36.421 [2024-02-14 19:19:13.582092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:36.421 [2024-02-14 19:19:13.582101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.421 [2024-02-14 19:19:13.611521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.421 [2024-02-14 19:19:13.611762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:36.421 [2024-02-14 19:19:13.611797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.386 ms 00:18:36.421 [2024-02-14 19:19:13.611811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.421 [2024-02-14 19:19:13.611959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.421 [2024-02-14 19:19:13.611980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:36.421 [2024-02-14 19:19:13.611993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.039 ms 00:18:36.421 [2024-02-14 19:19:13.612013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.421 [2024-02-14 19:19:13.613273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:36.421 [2024-02-14 19:19:13.617074] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.844 ms, result 0 00:18:36.421 [2024-02-14 19:19:13.617910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:36.421 [2024-02-14 19:19:13.633770] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:36.421  Copying: 4096/4096 [kB] (average 21 MBps)[2024-02-14 19:19:13.822532] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:18:36.421 [2024-02-14 19:19:13.822637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:36.421 [2024-02-14 19:19:13.833491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.421 [2024-02-14 19:19:13.833587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:36.421 [2024-02-14 19:19:13.833623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:36.421 [2024-02-14 19:19:13.833634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.421 [2024-02-14 19:19:13.833707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:36.421 [2024-02-14 19:19:13.837084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.421 [2024-02-14 19:19:13.837116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:36.421 [2024-02-14 19:19:13.837148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.355 ms 00:18:36.421 [2024-02-14 19:19:13.837167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:13.839072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.681 [2024-02-14 19:19:13.839111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:36.681 [2024-02-14 19:19:13.839160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:18:36.681 [2024-02-14 19:19:13.839170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:13.843611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.681 [2024-02-14 19:19:13.843650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:36.681 [2024-02-14 19:19:13.843683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.401 ms 00:18:36.681 [2024-02-14 19:19:13.843693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:13.851475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.681 [2024-02-14 19:19:13.851538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:36.681 [2024-02-14 19:19:13.851570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.730 ms 00:18:36.681 [2024-02-14 19:19:13.851580] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:13.880337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.681 [2024-02-14 19:19:13.880376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:36.681 [2024-02-14 19:19:13.880408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.686 ms 00:18:36.681 [2024-02-14 19:19:13.880417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:13.897015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.681 [2024-02-14 19:19:13.897054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:36.681 [2024-02-14 19:19:13.897087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.452 ms 00:18:36.681 [2024-02-14 19:19:13.897097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:13.897279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.681 [2024-02-14 19:19:13.897298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:36.681 [2024-02-14 19:19:13.897316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:36.681 [2024-02-14 19:19:13.897327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:13.926465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.681 [2024-02-14 19:19:13.926544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:36.681 [2024-02-14 19:19:13.926578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.118 ms 00:18:36.681 [2024-02-14 19:19:13.926587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:13.955187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.681 [2024-02-14 19:19:13.955255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:36.681 [2024-02-14 19:19:13.955285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.507 ms 00:18:36.681 [2024-02-14 19:19:13.955294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:13.984184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.681 [2024-02-14 19:19:13.984221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:36.681 [2024-02-14 19:19:13.984252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.818 ms 00:18:36.681 [2024-02-14 19:19:13.984261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:14.012712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.681 [2024-02-14 19:19:14.012758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:36.681 [2024-02-14 19:19:14.012790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.349 ms 00:18:36.681 [2024-02-14 19:19:14.012799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.681 [2024-02-14 19:19:14.012920] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:36.681 [2024-02-14 19:19:14.012944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.012956] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.012966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.012976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.012986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.012996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:36.681 [2024-02-14 19:19:14.013125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013205] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 
19:19:14.013471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:18:36.682 [2024-02-14 19:19:14.013839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.013993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:36.682 [2024-02-14 19:19:14.014140] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:36.682 
[2024-02-14 19:19:14.014152] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d840bd34-7af4-4abc-a654-d3a1fdd47b5d 00:18:36.682 [2024-02-14 19:19:14.014168] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:36.682 [2024-02-14 19:19:14.014179] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:36.682 [2024-02-14 19:19:14.014204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:36.682 [2024-02-14 19:19:14.014215] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:36.682 [2024-02-14 19:19:14.014224] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:36.682 [2024-02-14 19:19:14.014235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:36.682 [2024-02-14 19:19:14.014245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:36.682 [2024-02-14 19:19:14.014254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:36.683 [2024-02-14 19:19:14.014263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:36.683 [2024-02-14 19:19:14.014274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.683 [2024-02-14 19:19:14.014285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:36.683 [2024-02-14 19:19:14.014296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:18:36.683 [2024-02-14 19:19:14.014306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.683 [2024-02-14 19:19:14.029173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.683 [2024-02-14 19:19:14.029224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:36.683 [2024-02-14 19:19:14.029255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.841 ms 00:18:36.683 [2024-02-14 19:19:14.029264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.683 [2024-02-14 19:19:14.029490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.683 [2024-02-14 19:19:14.029506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:36.683 [2024-02-14 19:19:14.029554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:18:36.683 [2024-02-14 19:19:14.029583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.683 [2024-02-14 19:19:14.072749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.683 [2024-02-14 19:19:14.072789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:36.683 [2024-02-14 19:19:14.072821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.683 [2024-02-14 19:19:14.072832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.683 [2024-02-14 19:19:14.072952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.683 [2024-02-14 19:19:14.072968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:36.683 [2024-02-14 19:19:14.072978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.683 [2024-02-14 19:19:14.072988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.683 [2024-02-14 19:19:14.073047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.683 [2024-02-14 
19:19:14.073063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:36.683 [2024-02-14 19:19:14.073074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.683 [2024-02-14 19:19:14.073083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.683 [2024-02-14 19:19:14.073105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.683 [2024-02-14 19:19:14.073117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:36.683 [2024-02-14 19:19:14.073126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.683 [2024-02-14 19:19:14.073135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.941 [2024-02-14 19:19:14.158587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.941 [2024-02-14 19:19:14.158645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:36.941 [2024-02-14 19:19:14.158678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.941 [2024-02-14 19:19:14.158688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.941 [2024-02-14 19:19:14.192723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.941 [2024-02-14 19:19:14.192759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:36.941 [2024-02-14 19:19:14.192789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.941 [2024-02-14 19:19:14.192799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.941 [2024-02-14 19:19:14.192879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.941 [2024-02-14 19:19:14.192896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:36.941 [2024-02-14 19:19:14.192906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.941 [2024-02-14 19:19:14.192915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.941 [2024-02-14 19:19:14.192947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.941 [2024-02-14 19:19:14.192959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:36.941 [2024-02-14 19:19:14.192969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.941 [2024-02-14 19:19:14.192978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.941 [2024-02-14 19:19:14.193083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.941 [2024-02-14 19:19:14.193105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:36.941 [2024-02-14 19:19:14.193116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.941 [2024-02-14 19:19:14.193126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.941 [2024-02-14 19:19:14.193176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.941 [2024-02-14 19:19:14.193192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:36.941 [2024-02-14 19:19:14.193202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.941 [2024-02-14 19:19:14.193211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.941 [2024-02-14 19:19:14.193268] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.941 [2024-02-14 19:19:14.193289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:36.941 [2024-02-14 19:19:14.193299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.941 [2024-02-14 19:19:14.193308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.941 [2024-02-14 19:19:14.193356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.942 [2024-02-14 19:19:14.193370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:36.942 [2024-02-14 19:19:14.193381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.942 [2024-02-14 19:19:14.193390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.942 [2024-02-14 19:19:14.193574] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 360.044 ms, result 0 00:18:37.877 00:18:37.877 00:18:37.877 19:19:15 -- ftl/trim.sh@93 -- # svcpid=74024 00:18:37.877 19:19:15 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:37.878 19:19:15 -- ftl/trim.sh@94 -- # waitforlisten 74024 00:18:37.878 19:19:15 -- common/autotest_common.sh@817 -- # '[' -z 74024 ']' 00:18:37.878 19:19:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.878 19:19:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:37.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.878 19:19:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.878 19:19:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.878 19:19:15 -- common/autotest_common.sh@10 -- # set +x 00:18:38.137 [2024-02-14 19:19:15.333040] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:18:38.137 [2024-02-14 19:19:15.333516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74024 ] 00:18:38.137 [2024-02-14 19:19:15.500604] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.396 [2024-02-14 19:19:15.654620] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:38.396 [2024-02-14 19:19:15.654845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.800 19:19:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:39.800 19:19:16 -- common/autotest_common.sh@850 -- # return 0 00:18:39.800 19:19:16 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:39.800 [2024-02-14 19:19:17.153463] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:39.800 [2024-02-14 19:19:17.153589] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.060 [2024-02-14 19:19:17.325731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.325801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.060 [2024-02-14 19:19:17.325840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:40.060 [2024-02-14 19:19:17.325852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.328877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.328948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.060 [2024-02-14 19:19:17.328984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:18:40.060 [2024-02-14 19:19:17.328995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.329154] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.060 [2024-02-14 19:19:17.330197] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.060 [2024-02-14 19:19:17.330238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.330284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.060 [2024-02-14 19:19:17.330309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:18:40.060 [2024-02-14 19:19:17.330320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.331681] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:40.060 [2024-02-14 19:19:17.346747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.346791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:40.060 [2024-02-14 19:19:17.346824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.072 ms 00:18:40.060 [2024-02-14 19:19:17.346836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.346955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.346977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:18:40.060 [2024-02-14 19:19:17.346990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:40.060 [2024-02-14 19:19:17.347001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.351640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.351699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.060 [2024-02-14 19:19:17.351715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.583 ms 00:18:40.060 [2024-02-14 19:19:17.351727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.351845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.351867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.060 [2024-02-14 19:19:17.351878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:40.060 [2024-02-14 19:19:17.351890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.351925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.351942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.060 [2024-02-14 19:19:17.351954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:40.060 [2024-02-14 19:19:17.351965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.351995] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:40.060 [2024-02-14 19:19:17.356063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.356097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.060 [2024-02-14 19:19:17.356131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.073 ms 00:18:40.060 [2024-02-14 19:19:17.356142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.356221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.356238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.060 [2024-02-14 19:19:17.356251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:40.060 [2024-02-14 19:19:17.356261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.356291] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:40.060 [2024-02-14 19:19:17.356315] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:40.060 [2024-02-14 19:19:17.356354] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:40.060 [2024-02-14 19:19:17.356373] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:40.060 [2024-02-14 19:19:17.356447] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:40.060 [2024-02-14 19:19:17.356462] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:18:40.060 [2024-02-14 19:19:17.356479] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:40.060 [2024-02-14 19:19:17.356510] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.060 [2024-02-14 19:19:17.356524] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.060 [2024-02-14 19:19:17.356573] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:40.060 [2024-02-14 19:19:17.356588] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.060 [2024-02-14 19:19:17.356599] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:40.060 [2024-02-14 19:19:17.356614] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:40.060 [2024-02-14 19:19:17.356625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.356638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.060 [2024-02-14 19:19:17.356650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:18:40.060 [2024-02-14 19:19:17.356662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.356730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.060 [2024-02-14 19:19:17.356749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.060 [2024-02-14 19:19:17.356760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:40.060 [2024-02-14 19:19:17.356772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.060 [2024-02-14 19:19:17.356936] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.060 [2024-02-14 19:19:17.356955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.060 [2024-02-14 19:19:17.356969] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.060 [2024-02-14 19:19:17.356983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.060 [2024-02-14 19:19:17.356995] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.060 [2024-02-14 19:19:17.357008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.060 [2024-02-14 19:19:17.357019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:40.060 [2024-02-14 19:19:17.357035] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.060 [2024-02-14 19:19:17.357047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:40.060 [2024-02-14 19:19:17.357060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.060 [2024-02-14 19:19:17.357071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.060 [2024-02-14 19:19:17.357084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:40.060 [2024-02-14 19:19:17.357095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.060 [2024-02-14 19:19:17.357109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.060 [2024-02-14 19:19:17.357120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:40.060 [2024-02-14 19:19:17.357132] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.060 [2024-02-14 19:19:17.357143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.060 [2024-02-14 19:19:17.357156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:40.060 [2024-02-14 19:19:17.357168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.060 [2024-02-14 19:19:17.357180] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:40.060 [2024-02-14 19:19:17.357191] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:40.060 [2024-02-14 19:19:17.357203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:40.060 [2024-02-14 19:19:17.357214] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.060 [2024-02-14 19:19:17.357229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:40.061 [2024-02-14 19:19:17.357240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.061 [2024-02-14 19:19:17.357267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.061 [2024-02-14 19:19:17.357277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:40.061 [2024-02-14 19:19:17.357289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.061 [2024-02-14 19:19:17.357312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.061 [2024-02-14 19:19:17.357325] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:40.061 [2024-02-14 19:19:17.357335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.061 [2024-02-14 19:19:17.357347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.061 [2024-02-14 19:19:17.357358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:40.061 [2024-02-14 19:19:17.357371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.061 [2024-02-14 19:19:17.357382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.061 [2024-02-14 19:19:17.357394] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:40.061 [2024-02-14 19:19:17.357404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.061 [2024-02-14 19:19:17.357416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.061 [2024-02-14 19:19:17.357426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:40.061 [2024-02-14 19:19:17.357441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.061 [2024-02-14 19:19:17.357451] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.061 [2024-02-14 19:19:17.357464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.061 [2024-02-14 19:19:17.357477] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.061 [2024-02-14 19:19:17.357489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.061 [2024-02-14 19:19:17.357501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.061 [2024-02-14 19:19:17.357514] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.061 [2024-02-14 19:19:17.357524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:18:40.061 [2024-02-14 19:19:17.357537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.061 [2024-02-14 19:19:17.357547] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.061 [2024-02-14 19:19:17.357559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.061 [2024-02-14 19:19:17.357572] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.061 [2024-02-14 19:19:17.357588] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.061 [2024-02-14 19:19:17.357625] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:40.061 [2024-02-14 19:19:17.357641] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:40.061 [2024-02-14 19:19:17.357678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:40.061 [2024-02-14 19:19:17.357696] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:40.061 [2024-02-14 19:19:17.357709] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:40.061 [2024-02-14 19:19:17.357724] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:40.061 [2024-02-14 19:19:17.357736] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:40.061 [2024-02-14 19:19:17.357750] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:40.061 [2024-02-14 19:19:17.357762] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:40.061 [2024-02-14 19:19:17.357776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:40.061 [2024-02-14 19:19:17.357787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:40.061 [2024-02-14 19:19:17.357802] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:40.061 [2024-02-14 19:19:17.357814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:40.061 [2024-02-14 19:19:17.357827] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.061 [2024-02-14 19:19:17.357841] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.061 [2024-02-14 19:19:17.357856] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.061 [2024-02-14 19:19:17.357868] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.061 [2024-02-14 19:19:17.357881] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.061 [2024-02-14 19:19:17.357894] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.061 [2024-02-14 19:19:17.357911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.061 [2024-02-14 19:19:17.357923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.061 [2024-02-14 19:19:17.357937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:18:40.061 [2024-02-14 19:19:17.357950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.061 [2024-02-14 19:19:17.376282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.061 [2024-02-14 19:19:17.376331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.061 [2024-02-14 19:19:17.376368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.244 ms 00:18:40.061 [2024-02-14 19:19:17.376380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.061 [2024-02-14 19:19:17.376590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.061 [2024-02-14 19:19:17.376612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.061 [2024-02-14 19:19:17.376628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:40.061 [2024-02-14 19:19:17.376640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.061 [2024-02-14 19:19:17.415052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.061 [2024-02-14 19:19:17.415107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:40.061 [2024-02-14 19:19:17.415146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.379 ms 00:18:40.061 [2024-02-14 19:19:17.415159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.061 [2024-02-14 19:19:17.415318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.061 [2024-02-14 19:19:17.415336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:40.061 [2024-02-14 19:19:17.415354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:40.061 [2024-02-14 19:19:17.415365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.061 [2024-02-14 19:19:17.415744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.061 [2024-02-14 19:19:17.415772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:40.061 [2024-02-14 19:19:17.415791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:18:40.061 [2024-02-14 19:19:17.415803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.061 [2024-02-14 19:19:17.415964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.061 [2024-02-14 19:19:17.415990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:40.061 [2024-02-14 19:19:17.416007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:18:40.061 [2024-02-14 19:19:17.416030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:40.061 [2024-02-14 19:19:17.434564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.061 [2024-02-14 19:19:17.434607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:40.061 [2024-02-14 19:19:17.434645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.502 ms 00:18:40.061 [2024-02-14 19:19:17.434656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.061 [2024-02-14 19:19:17.451048] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:40.061 [2024-02-14 19:19:17.451089] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:40.061 [2024-02-14 19:19:17.451127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.061 [2024-02-14 19:19:17.451139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:40.061 [2024-02-14 19:19:17.451154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.325 ms 00:18:40.061 [2024-02-14 19:19:17.451166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.479776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.479819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:40.321 [2024-02-14 19:19:17.479870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.490 ms 00:18:40.321 [2024-02-14 19:19:17.479883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.495191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.495259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:40.321 [2024-02-14 19:19:17.495294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.221 ms 00:18:40.321 [2024-02-14 19:19:17.495305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.510407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.510445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:40.321 [2024-02-14 19:19:17.510483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.006 ms 00:18:40.321 [2024-02-14 19:19:17.510494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.511014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.511055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:40.321 [2024-02-14 19:19:17.511074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:18:40.321 [2024-02-14 19:19:17.511091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.580697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.580760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:40.321 [2024-02-14 19:19:17.580801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.570 ms 00:18:40.321 [2024-02-14 19:19:17.580812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 
19:19:17.592829] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:40.321 [2024-02-14 19:19:17.606127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.606233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:40.321 [2024-02-14 19:19:17.606252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.158 ms 00:18:40.321 [2024-02-14 19:19:17.606265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.606376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.606397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:40.321 [2024-02-14 19:19:17.606410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:40.321 [2024-02-14 19:19:17.606425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.606477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.606495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:40.321 [2024-02-14 19:19:17.606564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:40.321 [2024-02-14 19:19:17.606580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.608582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.608623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:40.321 [2024-02-14 19:19:17.608653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.973 ms 00:18:40.321 [2024-02-14 19:19:17.608665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.608704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.608720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:40.321 [2024-02-14 19:19:17.608731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:40.321 [2024-02-14 19:19:17.608742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.608782] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:40.321 [2024-02-14 19:19:17.608801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.608812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:40.321 [2024-02-14 19:19:17.608824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:40.321 [2024-02-14 19:19:17.608833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.637528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.637764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:40.321 [2024-02-14 19:19:17.637906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.662 ms 00:18:40.321 [2024-02-14 19:19:17.637962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.638137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.321 [2024-02-14 19:19:17.638222] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:40.321 [2024-02-14 19:19:17.638340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:40.321 [2024-02-14 19:19:17.638470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.321 [2024-02-14 19:19:17.639808] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.321 [2024-02-14 19:19:17.644009] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.719 ms, result 0 00:18:40.321 [2024-02-14 19:19:17.645087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:40.321 Some configs were skipped because the RPC state that can call them passed over. 00:18:40.321 19:19:17 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:40.596 [2024-02-14 19:19:17.946532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.596 [2024-02-14 19:19:17.946621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:40.596 [2024-02-14 19:19:17.946643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.254 ms 00:18:40.596 [2024-02-14 19:19:17.946658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.596 [2024-02-14 19:19:17.946718] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 35.436 ms, result 0 00:18:40.596 true 00:18:40.596 19:19:17 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:40.902 [2024-02-14 19:19:18.197067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.902 [2024-02-14 19:19:18.197114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:40.902 [2024-02-14 19:19:18.197153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.069 ms 00:18:40.902 [2024-02-14 19:19:18.197165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.902 [2024-02-14 19:19:18.197228] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 30.220 ms, result 0 00:18:40.902 true 00:18:40.902 19:19:18 -- ftl/trim.sh@102 -- # killprocess 74024 00:18:40.902 19:19:18 -- common/autotest_common.sh@924 -- # '[' -z 74024 ']' 00:18:40.902 19:19:18 -- common/autotest_common.sh@928 -- # kill -0 74024 00:18:40.902 19:19:18 -- common/autotest_common.sh@929 -- # uname 00:18:40.902 19:19:18 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:40.902 19:19:18 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 74024 00:18:40.902 19:19:18 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:40.902 19:19:18 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:40.902 killing process with pid 74024 00:18:40.902 19:19:18 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 74024' 00:18:40.902 19:19:18 -- common/autotest_common.sh@943 -- # kill 74024 00:18:40.902 19:19:18 -- common/autotest_common.sh@948 -- # wait 74024 00:18:41.837 [2024-02-14 19:19:19.106997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.837 [2024-02-14 19:19:19.107083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:18:41.837 [2024-02-14 19:19:19.107120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:41.837 [2024-02-14 19:19:19.107133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.837 [2024-02-14 19:19:19.107173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:41.837 [2024-02-14 19:19:19.110429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.837 [2024-02-14 19:19:19.110475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:41.837 [2024-02-14 19:19:19.110518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:18:41.837 [2024-02-14 19:19:19.110530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.837 [2024-02-14 19:19:19.110889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.837 [2024-02-14 19:19:19.110917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:41.837 [2024-02-14 19:19:19.110935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:18:41.837 [2024-02-14 19:19:19.110948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.837 [2024-02-14 19:19:19.114920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.837 [2024-02-14 19:19:19.114964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:41.837 [2024-02-14 19:19:19.114983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.942 ms 00:18:41.837 [2024-02-14 19:19:19.114995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.837 [2024-02-14 19:19:19.122229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.837 [2024-02-14 19:19:19.122281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:41.837 [2024-02-14 19:19:19.122315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.185 ms 00:18:41.837 [2024-02-14 19:19:19.122327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.837 [2024-02-14 19:19:19.133859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.837 [2024-02-14 19:19:19.133897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:41.837 [2024-02-14 19:19:19.133919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.462 ms 00:18:41.837 [2024-02-14 19:19:19.133931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.837 [2024-02-14 19:19:19.142316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.837 [2024-02-14 19:19:19.142373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:41.837 [2024-02-14 19:19:19.142406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.336 ms 00:18:41.837 [2024-02-14 19:19:19.142423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.837 [2024-02-14 19:19:19.142582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.838 [2024-02-14 19:19:19.142602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:41.838 [2024-02-14 19:19:19.142617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:41.838 [2024-02-14 19:19:19.142628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:41.838 [2024-02-14 19:19:19.154754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.838 [2024-02-14 19:19:19.154805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:41.838 [2024-02-14 19:19:19.154838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.068 ms 00:18:41.838 [2024-02-14 19:19:19.154848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.838 [2024-02-14 19:19:19.167007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.838 [2024-02-14 19:19:19.167057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:41.838 [2024-02-14 19:19:19.167092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.114 ms 00:18:41.838 [2024-02-14 19:19:19.167103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.838 [2024-02-14 19:19:19.178579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.838 [2024-02-14 19:19:19.178629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:41.838 [2024-02-14 19:19:19.178662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.425 ms 00:18:41.838 [2024-02-14 19:19:19.178672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.838 [2024-02-14 19:19:19.190152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.838 [2024-02-14 19:19:19.190218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:41.838 [2024-02-14 19:19:19.190251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.410 ms 00:18:41.838 [2024-02-14 19:19:19.190261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.838 [2024-02-14 19:19:19.190304] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:41.838 [2024-02-14 19:19:19.190325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190493] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 
19:19:19.190845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.190994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:18:41.838 [2024-02-14 19:19:19.191191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:41.838 [2024-02-14 19:19:19.191347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:41.839 [2024-02-14 19:19:19.191735] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:41.839 [2024-02-14 19:19:19.191752] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d840bd34-7af4-4abc-a654-d3a1fdd47b5d 00:18:41.839 [2024-02-14 19:19:19.191764] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:41.839 [2024-02-14 19:19:19.191777] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:41.839 [2024-02-14 19:19:19.191787] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:41.839 [2024-02-14 19:19:19.191800] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:41.839 [2024-02-14 19:19:19.191811] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:41.839 [2024-02-14 19:19:19.191826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:41.839 [2024-02-14 19:19:19.191837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:41.839 [2024-02-14 19:19:19.191849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:41.839 [2024-02-14 19:19:19.191859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:41.839 [2024-02-14 19:19:19.191872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.839 [2024-02-14 19:19:19.191884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:41.839 [2024-02-14 19:19:19.191897] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.571 ms 00:18:41.839 [2024-02-14 19:19:19.191911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.839 [2024-02-14 19:19:19.207387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.839 [2024-02-14 19:19:19.207438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:41.839 [2024-02-14 19:19:19.207475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.433 ms 00:18:41.839 [2024-02-14 19:19:19.207487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.839 [2024-02-14 19:19:19.207811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.839 [2024-02-14 19:19:19.207845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:41.839 [2024-02-14 19:19:19.207881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:18:41.839 [2024-02-14 19:19:19.207894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.261120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.261184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:42.097 [2024-02-14 19:19:19.261234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.261245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.261347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.261363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:42.097 [2024-02-14 19:19:19.261379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.261389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.261496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.261515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:42.097 [2024-02-14 19:19:19.261531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.261543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.261585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.261602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:42.097 [2024-02-14 19:19:19.261616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.261629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.358377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.358447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:42.097 [2024-02-14 19:19:19.358485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.358513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.395283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.395338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:18:42.097 [2024-02-14 19:19:19.395376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.395387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.395452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.395469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:42.097 [2024-02-14 19:19:19.395486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.395496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.395574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.395589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:42.097 [2024-02-14 19:19:19.395618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.395630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.395775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.395794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:42.097 [2024-02-14 19:19:19.395809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.395821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.395878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.395923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:42.097 [2024-02-14 19:19:19.395940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.395952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.396005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.396021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:42.097 [2024-02-14 19:19:19.396038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.396049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-02-14 19:19:19.396109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.097 [2024-02-14 19:19:19.396127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:42.097 [2024-02-14 19:19:19.396142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.097 [2024-02-14 19:19:19.396153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.098 [2024-02-14 19:19:19.396319] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 289.297 ms, result 0 00:18:43.031 19:19:20 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:43.289 [2024-02-14 19:19:20.480629] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:18:43.289 [2024-02-14 19:19:20.480795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74091 ] 00:18:43.289 [2024-02-14 19:19:20.638008] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.547 [2024-02-14 19:19:20.807089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.547 [2024-02-14 19:19:20.807186] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:18:43.805 [2024-02-14 19:19:21.094293] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.805 [2024-02-14 19:19:21.094400] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:44.065 [2024-02-14 19:19:21.247696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.065 [2024-02-14 19:19:21.247762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:44.065 [2024-02-14 19:19:21.247797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:44.065 [2024-02-14 19:19:21.247807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.065 [2024-02-14 19:19:21.250939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.065 [2024-02-14 19:19:21.250998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:44.065 [2024-02-14 19:19:21.251030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.106 ms 00:18:44.065 [2024-02-14 19:19:21.251042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.065 [2024-02-14 19:19:21.251190] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:44.065 [2024-02-14 19:19:21.252204] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:44.065 [2024-02-14 19:19:21.252259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.065 [2024-02-14 19:19:21.252288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.065 [2024-02-14 19:19:21.252300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:18:44.065 [2024-02-14 19:19:21.252310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.065 [2024-02-14 19:19:21.253709] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:44.065 [2024-02-14 19:19:21.268524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-02-14 19:19:21.268587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:44.066 [2024-02-14 19:19:21.268619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.817 ms 00:18:44.066 [2024-02-14 19:19:21.268630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-02-14 19:19:21.268744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-02-14 19:19:21.268766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:44.066 [2024-02-14 19:19:21.268778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 
ms 00:18:44.066 [2024-02-14 19:19:21.268794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-02-14 19:19:21.273571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-02-14 19:19:21.273624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.066 [2024-02-14 19:19:21.273661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.657 ms 00:18:44.066 [2024-02-14 19:19:21.273712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-02-14 19:19:21.273847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-02-14 19:19:21.273869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.066 [2024-02-14 19:19:21.273882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:44.066 [2024-02-14 19:19:21.273893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-02-14 19:19:21.273934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-02-14 19:19:21.273949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:44.066 [2024-02-14 19:19:21.273961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:44.066 [2024-02-14 19:19:21.273972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-02-14 19:19:21.274008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:44.066 [2024-02-14 19:19:21.278128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-02-14 19:19:21.278178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.066 [2024-02-14 19:19:21.278229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.128 ms 00:18:44.066 [2024-02-14 19:19:21.278240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-02-14 19:19:21.278317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-02-14 19:19:21.278335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:44.066 [2024-02-14 19:19:21.278346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:44.066 [2024-02-14 19:19:21.278356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-02-14 19:19:21.278380] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:44.066 [2024-02-14 19:19:21.278405] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:44.066 [2024-02-14 19:19:21.278441] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:44.066 [2024-02-14 19:19:21.278509] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:44.066 [2024-02-14 19:19:21.278600] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:44.066 [2024-02-14 19:19:21.278618] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:44.066 [2024-02-14 19:19:21.278632] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:18:44.066 [2024-02-14 19:19:21.278646] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:44.066 [2024-02-14 19:19:21.278658] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:44.066 [2024-02-14 19:19:21.278670] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:44.066 [2024-02-14 19:19:21.278680] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:44.066 [2024-02-14 19:19:21.278690] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:44.066 [2024-02-14 19:19:21.278705] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:44.066 [2024-02-14 19:19:21.278716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-02-14 19:19:21.278727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:44.066 [2024-02-14 19:19:21.278738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:18:44.066 [2024-02-14 19:19:21.278749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-02-14 19:19:21.278858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-02-14 19:19:21.278885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:44.066 [2024-02-14 19:19:21.278899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:44.066 [2024-02-14 19:19:21.278909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-02-14 19:19:21.279006] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:44.066 [2024-02-14 19:19:21.279024] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:44.066 [2024-02-14 19:19:21.279036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:44.066 [2024-02-14 19:19:21.279047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:44.066 [2024-02-14 19:19:21.279069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:44.066 [2024-02-14 19:19:21.279089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:44.066 [2024-02-14 19:19:21.279099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:44.066 [2024-02-14 19:19:21.279119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:44.066 [2024-02-14 19:19:21.279129] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:44.066 [2024-02-14 19:19:21.279139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:44.066 [2024-02-14 19:19:21.279149] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:44.066 [2024-02-14 19:19:21.279160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:44.066 [2024-02-14 19:19:21.279170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279181] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:18:44.066 [2024-02-14 19:19:21.279191] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:44.066 [2024-02-14 19:19:21.279215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279226] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:44.066 [2024-02-14 19:19:21.279236] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:44.066 [2024-02-14 19:19:21.279246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:44.066 [2024-02-14 19:19:21.279256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:44.066 [2024-02-14 19:19:21.279266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:44.066 [2024-02-14 19:19:21.279285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:44.066 [2024-02-14 19:19:21.279295] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:44.066 [2024-02-14 19:19:21.279315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:44.066 [2024-02-14 19:19:21.279325] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:44.066 [2024-02-14 19:19:21.279344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:44.066 [2024-02-14 19:19:21.279354] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:44.066 [2024-02-14 19:19:21.279374] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:44.066 [2024-02-14 19:19:21.279383] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:44.066 [2024-02-14 19:19:21.279403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:44.066 [2024-02-14 19:19:21.279413] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:44.066 [2024-02-14 19:19:21.279423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:44.066 [2024-02-14 19:19:21.279433] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:44.066 [2024-02-14 19:19:21.279464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:44.066 [2024-02-14 19:19:21.279474] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:44.066 [2024-02-14 19:19:21.279485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.066 [2024-02-14 19:19:21.279495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:44.066 [2024-02-14 19:19:21.279539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:44.066 [2024-02-14 19:19:21.279550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:44.066 [2024-02-14 19:19:21.279563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:44.066 [2024-02-14 19:19:21.279574] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:44.066 [2024-02-14 19:19:21.279585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:44.066 [2024-02-14 19:19:21.279597] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:44.066 [2024-02-14 19:19:21.279610] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:44.066 [2024-02-14 19:19:21.279628] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:44.066 [2024-02-14 19:19:21.279639] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:44.066 [2024-02-14 19:19:21.279650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:44.067 [2024-02-14 19:19:21.279661] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:44.067 [2024-02-14 19:19:21.279672] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:44.067 [2024-02-14 19:19:21.279683] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:44.067 [2024-02-14 19:19:21.279701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:44.067 [2024-02-14 19:19:21.279712] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:44.067 [2024-02-14 19:19:21.279723] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:44.067 [2024-02-14 19:19:21.279734] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:44.067 [2024-02-14 19:19:21.279745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:44.067 [2024-02-14 19:19:21.279756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:44.067 [2024-02-14 19:19:21.279768] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:44.067 [2024-02-14 19:19:21.279778] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:44.067 [2024-02-14 19:19:21.279791] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:44.067 [2024-02-14 19:19:21.279803] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:44.067 [2024-02-14 19:19:21.279814] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:44.067 [2024-02-14 19:19:21.279825] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:44.067 [2024-02-14 19:19:21.279836] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:44.067 [2024-02-14 19:19:21.279848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.279859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:44.067 [2024-02-14 19:19:21.279870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:18:44.067 [2024-02-14 19:19:21.279881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.298186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.298248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.067 [2024-02-14 19:19:21.298265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.240 ms 00:18:44.067 [2024-02-14 19:19:21.298276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.298432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.298452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:44.067 [2024-02-14 19:19:21.298464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:44.067 [2024-02-14 19:19:21.298473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.347438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.347538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.067 [2024-02-14 19:19:21.347557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.877 ms 00:18:44.067 [2024-02-14 19:19:21.347569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.347708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.347726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.067 [2024-02-14 19:19:21.347743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:44.067 [2024-02-14 19:19:21.347754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.348151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.348182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.067 [2024-02-14 19:19:21.348197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:18:44.067 [2024-02-14 19:19:21.348208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.348363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.348384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.067 [2024-02-14 19:19:21.348397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:18:44.067 [2024-02-14 19:19:21.348412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.364713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 
19:19:21.364768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.067 [2024-02-14 19:19:21.364804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.261 ms 00:18:44.067 [2024-02-14 19:19:21.364814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.379962] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:44.067 [2024-02-14 19:19:21.380017] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:44.067 [2024-02-14 19:19:21.380049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.380060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:44.067 [2024-02-14 19:19:21.380072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.068 ms 00:18:44.067 [2024-02-14 19:19:21.380081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.406804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.406879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:44.067 [2024-02-14 19:19:21.406911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.634 ms 00:18:44.067 [2024-02-14 19:19:21.406922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.421507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.421571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:44.067 [2024-02-14 19:19:21.421603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.498 ms 00:18:44.067 [2024-02-14 19:19:21.421613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.435777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.435829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:44.067 [2024-02-14 19:19:21.435877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.041 ms 00:18:44.067 [2024-02-14 19:19:21.435887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.067 [2024-02-14 19:19:21.436394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.067 [2024-02-14 19:19:21.436430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:44.067 [2024-02-14 19:19:21.436445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:18:44.067 [2024-02-14 19:19:21.436461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.326 [2024-02-14 19:19:21.507965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.326 [2024-02-14 19:19:21.508048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:44.326 [2024-02-14 19:19:21.508076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.443 ms 00:18:44.326 [2024-02-14 19:19:21.508088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.326 [2024-02-14 19:19:21.520632] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:44.326 [2024-02-14 19:19:21.533822] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.326 [2024-02-14 19:19:21.533896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:44.326 [2024-02-14 19:19:21.533916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.563 ms 00:18:44.326 [2024-02-14 19:19:21.533928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.326 [2024-02-14 19:19:21.534108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.326 [2024-02-14 19:19:21.534142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:44.326 [2024-02-14 19:19:21.534155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:44.326 [2024-02-14 19:19:21.534169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.326 [2024-02-14 19:19:21.534243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.326 [2024-02-14 19:19:21.534292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:44.326 [2024-02-14 19:19:21.534304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:44.326 [2024-02-14 19:19:21.534314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.326 [2024-02-14 19:19:21.536184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.326 [2024-02-14 19:19:21.536251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:44.326 [2024-02-14 19:19:21.536265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.839 ms 00:18:44.326 [2024-02-14 19:19:21.536290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.326 [2024-02-14 19:19:21.536331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.326 [2024-02-14 19:19:21.536346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.326 [2024-02-14 19:19:21.536357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:44.326 [2024-02-14 19:19:21.536367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.326 [2024-02-14 19:19:21.536404] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:44.326 [2024-02-14 19:19:21.536419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.326 [2024-02-14 19:19:21.536429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:44.326 [2024-02-14 19:19:21.536439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:44.326 [2024-02-14 19:19:21.536448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.326 [2024-02-14 19:19:21.565208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.326 [2024-02-14 19:19:21.565263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.326 [2024-02-14 19:19:21.565310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.717 ms 00:18:44.326 [2024-02-14 19:19:21.565320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.326 [2024-02-14 19:19:21.565436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.326 [2024-02-14 19:19:21.565455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:44.326 [2024-02-14 19:19:21.565467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.034 ms 00:18:44.326 [2024-02-14 19:19:21.565477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.326 [2024-02-14 19:19:21.566514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:44.326 [2024-02-14 19:19:21.570449] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 318.434 ms, result 0 00:18:44.326 [2024-02-14 19:19:21.571323] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.326 [2024-02-14 19:19:21.587338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:55.752  Copying: 25/256 [MB] (25 MBps) Copying: 48/256 [MB] (22 MBps) Copying: 71/256 [MB] (22 MBps) Copying: 93/256 [MB] (22 MBps) Copying: 115/256 [MB] (21 MBps) Copying: 138/256 [MB] (22 MBps) Copying: 161/256 [MB] (23 MBps) Copying: 184/256 [MB] (22 MBps) Copying: 207/256 [MB] (22 MBps) Copying: 230/256 [MB] (23 MBps) Copying: 252/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-02-14 19:19:33.126611] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:18:55.752 [2024-02-14 19:19:33.126847] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:55.752 [2024-02-14 19:19:33.142311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.752 [2024-02-14 19:19:33.142355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:55.752 [2024-02-14 19:19:33.142375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:55.752 [2024-02-14 19:19:33.142387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.752 [2024-02-14 19:19:33.142420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:55.752 [2024-02-14 19:19:33.145784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.752 [2024-02-14 19:19:33.145840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:55.752 [2024-02-14 19:19:33.145857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:18:55.752 [2024-02-14 19:19:33.145869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.752 [2024-02-14 19:19:33.146227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.752 [2024-02-14 19:19:33.146259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:55.752 [2024-02-14 19:19:33.146273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:18:55.752 [2024-02-14 19:19:33.146284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.752 [2024-02-14 19:19:33.150124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.752 [2024-02-14 19:19:33.150158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:55.752 [2024-02-14 19:19:33.150173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.799 ms 00:18:55.752 [2024-02-14 19:19:33.150184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.752 [2024-02-14 19:19:33.157790] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.752 [2024-02-14 19:19:33.157827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:55.752 [2024-02-14 19:19:33.157846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.579 ms 00:18:55.752 [2024-02-14 19:19:33.157858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.011 [2024-02-14 19:19:33.188579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.011 [2024-02-14 19:19:33.188622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:56.011 [2024-02-14 19:19:33.188639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.636 ms 00:18:56.011 [2024-02-14 19:19:33.188651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.011 [2024-02-14 19:19:33.206025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.011 [2024-02-14 19:19:33.206066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:56.011 [2024-02-14 19:19:33.206083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.312 ms 00:18:56.011 [2024-02-14 19:19:33.206104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.011 [2024-02-14 19:19:33.206263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.011 [2024-02-14 19:19:33.206283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:56.011 [2024-02-14 19:19:33.206295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:56.011 [2024-02-14 19:19:33.206313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.011 [2024-02-14 19:19:33.237307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.011 [2024-02-14 19:19:33.237346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:56.011 [2024-02-14 19:19:33.237362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.970 ms 00:18:56.012 [2024-02-14 19:19:33.237373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.012 [2024-02-14 19:19:33.268188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.012 [2024-02-14 19:19:33.268228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:56.012 [2024-02-14 19:19:33.268244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.753 ms 00:18:56.012 [2024-02-14 19:19:33.268254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.012 [2024-02-14 19:19:33.298667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.012 [2024-02-14 19:19:33.298707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:56.012 [2024-02-14 19:19:33.298723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.351 ms 00:18:56.012 [2024-02-14 19:19:33.298734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.012 [2024-02-14 19:19:33.329118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.012 [2024-02-14 19:19:33.329158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:56.012 [2024-02-14 19:19:33.329174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.290 ms 00:18:56.012 [2024-02-14 19:19:33.329185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:56.012 [2024-02-14 19:19:33.329245] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:56.012 [2024-02-14 19:19:33.329268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:18:56.012 [2024-02-14 19:19:33.329575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.329997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:56.012 [2024-02-14 19:19:33.330192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330428] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:56.013 [2024-02-14 19:19:33.330475] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:56.013 [2024-02-14 19:19:33.330499] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d840bd34-7af4-4abc-a654-d3a1fdd47b5d 00:18:56.013 [2024-02-14 19:19:33.330512] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:56.013 [2024-02-14 19:19:33.330528] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:56.013 [2024-02-14 19:19:33.330538] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:56.013 [2024-02-14 19:19:33.330549] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:56.013 [2024-02-14 19:19:33.330559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:56.013 [2024-02-14 19:19:33.330570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:56.013 [2024-02-14 19:19:33.330580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:56.013 [2024-02-14 19:19:33.330590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:56.013 [2024-02-14 19:19:33.330600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:56.013 [2024-02-14 19:19:33.330611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.013 [2024-02-14 19:19:33.330622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:56.013 [2024-02-14 19:19:33.330633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:18:56.013 [2024-02-14 19:19:33.330644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.013 [2024-02-14 19:19:33.347009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.013 [2024-02-14 19:19:33.347045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:56.013 [2024-02-14 19:19:33.347061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.338 ms 00:18:56.013 [2024-02-14 19:19:33.347072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.013 [2024-02-14 19:19:33.347340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.013 [2024-02-14 19:19:33.347367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:56.013 [2024-02-14 19:19:33.347381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:18:56.013 [2024-02-14 19:19:33.347392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.013 [2024-02-14 19:19:33.395837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.013 [2024-02-14 19:19:33.395891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:56.013 [2024-02-14 19:19:33.395909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.013 [2024-02-14 19:19:33.395920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.013 [2024-02-14 19:19:33.396035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.013 [2024-02-14 19:19:33.396054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:56.013 
[2024-02-14 19:19:33.396066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.013 [2024-02-14 19:19:33.396076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.013 [2024-02-14 19:19:33.396142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.013 [2024-02-14 19:19:33.396160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:56.013 [2024-02-14 19:19:33.396172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.013 [2024-02-14 19:19:33.396183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.013 [2024-02-14 19:19:33.396219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.013 [2024-02-14 19:19:33.396232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:56.013 [2024-02-14 19:19:33.396244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.013 [2024-02-14 19:19:33.396254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.272 [2024-02-14 19:19:33.493376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.272 [2024-02-14 19:19:33.493442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:56.272 [2024-02-14 19:19:33.493460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.272 [2024-02-14 19:19:33.493472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.272 [2024-02-14 19:19:33.532192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.272 [2024-02-14 19:19:33.532233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:56.272 [2024-02-14 19:19:33.532250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.272 [2024-02-14 19:19:33.532261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.272 [2024-02-14 19:19:33.532348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.272 [2024-02-14 19:19:33.532373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:56.272 [2024-02-14 19:19:33.532385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.272 [2024-02-14 19:19:33.532396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.272 [2024-02-14 19:19:33.532432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.272 [2024-02-14 19:19:33.532445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:56.272 [2024-02-14 19:19:33.532456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.272 [2024-02-14 19:19:33.532467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.272 [2024-02-14 19:19:33.532608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.272 [2024-02-14 19:19:33.532629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:56.272 [2024-02-14 19:19:33.532647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.272 [2024-02-14 19:19:33.532658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.272 [2024-02-14 19:19:33.532715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.272 [2024-02-14 19:19:33.532733] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:56.272 [2024-02-14 19:19:33.532745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.273 [2024-02-14 19:19:33.532756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.273 [2024-02-14 19:19:33.532804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.273 [2024-02-14 19:19:33.532819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:56.273 [2024-02-14 19:19:33.532840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.273 [2024-02-14 19:19:33.532850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.273 [2024-02-14 19:19:33.532905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.273 [2024-02-14 19:19:33.532927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:56.273 [2024-02-14 19:19:33.532940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.273 [2024-02-14 19:19:33.532950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.273 [2024-02-14 19:19:33.533113] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 390.816 ms, result 0 00:18:57.673 00:18:57.673 00:18:57.673 19:19:34 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:57.931 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:57.931 19:19:35 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:57.931 19:19:35 -- ftl/trim.sh@109 -- # fio_kill 00:18:57.931 19:19:35 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:57.931 19:19:35 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:57.931 19:19:35 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:57.931 19:19:35 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:58.190 19:19:35 -- ftl/trim.sh@20 -- # killprocess 74024 00:18:58.190 19:19:35 -- common/autotest_common.sh@924 -- # '[' -z 74024 ']' 00:18:58.190 19:19:35 -- common/autotest_common.sh@928 -- # kill -0 74024 00:18:58.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (74024) - No such process 00:18:58.190 Process with pid 74024 is not found 00:18:58.190 19:19:35 -- common/autotest_common.sh@951 -- # echo 'Process with pid 74024 is not found' 00:18:58.190 00:18:58.190 real 1m11.895s 00:18:58.190 user 1m38.089s 00:18:58.190 sys 0m6.432s 00:18:58.190 19:19:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:58.190 ************************************ 00:18:58.190 END TEST ftl_trim 00:18:58.190 ************************************ 00:18:58.190 19:19:35 -- common/autotest_common.sh@10 -- # set +x 00:18:58.190 19:19:35 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:18:58.190 19:19:35 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:18:58.190 19:19:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:58.190 19:19:35 -- common/autotest_common.sh@10 -- # set +x 00:18:58.190 ************************************ 00:18:58.190 START TEST ftl_restore 00:18:58.190 ************************************ 00:18:58.190 19:19:35 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:06.0 0000:00:07.0 00:18:58.190 * Looking for test storage... 00:18:58.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:58.190 19:19:35 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:58.190 19:19:35 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:58.190 19:19:35 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:58.190 19:19:35 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:58.190 19:19:35 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:58.190 19:19:35 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:58.190 19:19:35 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:58.190 19:19:35 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:58.190 19:19:35 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:58.190 19:19:35 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:58.190 19:19:35 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:58.190 19:19:35 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:58.190 19:19:35 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:58.190 19:19:35 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:58.190 19:19:35 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:58.190 19:19:35 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:58.190 19:19:35 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:58.190 19:19:35 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:58.190 19:19:35 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:58.190 19:19:35 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:58.190 19:19:35 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:58.190 19:19:35 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:58.190 19:19:35 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:58.190 19:19:35 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:58.190 19:19:35 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:58.190 19:19:35 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:58.190 19:19:35 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:58.190 19:19:35 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:58.190 19:19:35 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:58.190 19:19:35 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:58.190 19:19:35 -- ftl/restore.sh@13 -- # mktemp -d 00:18:58.190 19:19:35 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.dCrJtYDb7L 00:18:58.190 19:19:35 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:58.190 19:19:35 -- ftl/restore.sh@16 -- # case $opt in 00:18:58.190 19:19:35 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:18:58.190 19:19:35 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:58.190 19:19:35 -- ftl/restore.sh@23 -- # shift 2 00:18:58.190 19:19:35 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:18:58.190 19:19:35 -- ftl/restore.sh@25 -- # timeout=240 00:18:58.190 19:19:35 -- 
ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:58.190 19:19:35 -- ftl/restore.sh@39 -- # svcpid=74302 00:18:58.190 19:19:35 -- ftl/restore.sh@41 -- # waitforlisten 74302 00:18:58.190 19:19:35 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:58.190 19:19:35 -- common/autotest_common.sh@817 -- # '[' -z 74302 ']' 00:18:58.190 19:19:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.190 19:19:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:58.190 19:19:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.190 19:19:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:58.190 19:19:35 -- common/autotest_common.sh@10 -- # set +x 00:18:58.449 [2024-02-14 19:19:35.635099] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:18:58.449 [2024-02-14 19:19:35.635803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74302 ] 00:18:58.449 [2024-02-14 19:19:35.799869] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.711 [2024-02-14 19:19:36.019358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:58.711 [2024-02-14 19:19:36.019625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.088 19:19:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:00.088 19:19:37 -- common/autotest_common.sh@850 -- # return 0 00:19:00.088 19:19:37 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:19:00.088 19:19:37 -- ftl/common.sh@54 -- # local name=nvme0 00:19:00.088 19:19:37 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:19:00.088 19:19:37 -- ftl/common.sh@56 -- # local size=103424 00:19:00.088 19:19:37 -- ftl/common.sh@59 -- # local base_bdev 00:19:00.088 19:19:37 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:19:00.346 19:19:37 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:00.346 19:19:37 -- ftl/common.sh@62 -- # local base_size 00:19:00.346 19:19:37 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:00.346 19:19:37 -- common/autotest_common.sh@1355 -- # local bdev_name=nvme0n1 00:19:00.346 19:19:37 -- common/autotest_common.sh@1356 -- # local bdev_info 00:19:00.346 19:19:37 -- common/autotest_common.sh@1357 -- # local bs 00:19:00.346 19:19:37 -- common/autotest_common.sh@1358 -- # local nb 00:19:00.346 19:19:37 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:00.604 19:19:37 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:19:00.604 { 00:19:00.604 "name": "nvme0n1", 00:19:00.604 "aliases": [ 00:19:00.604 "7920448e-8c91-4a88-9bb6-5d8f51c8a4dc" 00:19:00.604 ], 00:19:00.604 "product_name": "NVMe disk", 00:19:00.604 "block_size": 4096, 00:19:00.604 "num_blocks": 1310720, 00:19:00.604 "uuid": "7920448e-8c91-4a88-9bb6-5d8f51c8a4dc", 00:19:00.604 "assigned_rate_limits": { 00:19:00.604 "rw_ios_per_sec": 0, 00:19:00.604 "rw_mbytes_per_sec": 0, 00:19:00.604 "r_mbytes_per_sec": 0, 00:19:00.604 "w_mbytes_per_sec": 0 
00:19:00.604 }, 00:19:00.604 "claimed": true, 00:19:00.604 "claim_type": "read_many_write_one", 00:19:00.604 "zoned": false, 00:19:00.604 "supported_io_types": { 00:19:00.604 "read": true, 00:19:00.604 "write": true, 00:19:00.604 "unmap": true, 00:19:00.604 "write_zeroes": true, 00:19:00.604 "flush": true, 00:19:00.604 "reset": true, 00:19:00.604 "compare": true, 00:19:00.604 "compare_and_write": false, 00:19:00.604 "abort": true, 00:19:00.604 "nvme_admin": true, 00:19:00.604 "nvme_io": true 00:19:00.604 }, 00:19:00.604 "driver_specific": { 00:19:00.604 "nvme": [ 00:19:00.604 { 00:19:00.604 "pci_address": "0000:00:07.0", 00:19:00.604 "trid": { 00:19:00.604 "trtype": "PCIe", 00:19:00.604 "traddr": "0000:00:07.0" 00:19:00.604 }, 00:19:00.604 "ctrlr_data": { 00:19:00.604 "cntlid": 0, 00:19:00.604 "vendor_id": "0x1b36", 00:19:00.604 "model_number": "QEMU NVMe Ctrl", 00:19:00.604 "serial_number": "12341", 00:19:00.604 "firmware_revision": "8.0.0", 00:19:00.604 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:00.604 "oacs": { 00:19:00.604 "security": 0, 00:19:00.604 "format": 1, 00:19:00.604 "firmware": 0, 00:19:00.604 "ns_manage": 1 00:19:00.604 }, 00:19:00.604 "multi_ctrlr": false, 00:19:00.604 "ana_reporting": false 00:19:00.604 }, 00:19:00.604 "vs": { 00:19:00.604 "nvme_version": "1.4" 00:19:00.604 }, 00:19:00.604 "ns_data": { 00:19:00.604 "id": 1, 00:19:00.604 "can_share": false 00:19:00.604 } 00:19:00.604 } 00:19:00.604 ], 00:19:00.604 "mp_policy": "active_passive" 00:19:00.604 } 00:19:00.604 } 00:19:00.604 ]' 00:19:00.604 19:19:37 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:19:00.604 19:19:37 -- common/autotest_common.sh@1360 -- # bs=4096 00:19:00.604 19:19:37 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:19:00.604 19:19:37 -- common/autotest_common.sh@1361 -- # nb=1310720 00:19:00.604 19:19:37 -- common/autotest_common.sh@1364 -- # bdev_size=5120 00:19:00.604 19:19:37 -- common/autotest_common.sh@1365 -- # echo 5120 00:19:00.604 19:19:37 -- ftl/common.sh@63 -- # base_size=5120 00:19:00.604 19:19:37 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:00.604 19:19:37 -- ftl/common.sh@67 -- # clear_lvols 00:19:00.604 19:19:37 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:00.604 19:19:37 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:00.862 19:19:38 -- ftl/common.sh@28 -- # stores=2a251c8e-2a35-4085-85cd-c67f0c4a697a 00:19:00.862 19:19:38 -- ftl/common.sh@29 -- # for lvs in $stores 00:19:00.862 19:19:38 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a251c8e-2a35-4085-85cd-c67f0c4a697a 00:19:01.121 19:19:38 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:01.379 19:19:38 -- ftl/common.sh@68 -- # lvs=df0c28da-f497-4a3a-852f-a282cc5d2695 00:19:01.379 19:19:38 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u df0c28da-f497-4a3a-852f-a282cc5d2695 00:19:01.637 19:19:38 -- ftl/restore.sh@43 -- # split_bdev=a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:01.637 19:19:38 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:19:01.637 19:19:38 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:01.637 19:19:38 -- ftl/common.sh@35 -- # local name=nvc0 00:19:01.637 19:19:38 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:19:01.637 19:19:38 -- ftl/common.sh@37 -- # local 
base_bdev=a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:01.637 19:19:38 -- ftl/common.sh@38 -- # local cache_size= 00:19:01.637 19:19:38 -- ftl/common.sh@41 -- # get_bdev_size a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:01.638 19:19:38 -- common/autotest_common.sh@1355 -- # local bdev_name=a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:01.638 19:19:38 -- common/autotest_common.sh@1356 -- # local bdev_info 00:19:01.638 19:19:38 -- common/autotest_common.sh@1357 -- # local bs 00:19:01.638 19:19:38 -- common/autotest_common.sh@1358 -- # local nb 00:19:01.638 19:19:38 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:01.895 19:19:39 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:19:01.895 { 00:19:01.896 "name": "a661aedc-0a4d-4c20-8ea0-c101e75a3816", 00:19:01.896 "aliases": [ 00:19:01.896 "lvs/nvme0n1p0" 00:19:01.896 ], 00:19:01.896 "product_name": "Logical Volume", 00:19:01.896 "block_size": 4096, 00:19:01.896 "num_blocks": 26476544, 00:19:01.896 "uuid": "a661aedc-0a4d-4c20-8ea0-c101e75a3816", 00:19:01.896 "assigned_rate_limits": { 00:19:01.896 "rw_ios_per_sec": 0, 00:19:01.896 "rw_mbytes_per_sec": 0, 00:19:01.896 "r_mbytes_per_sec": 0, 00:19:01.896 "w_mbytes_per_sec": 0 00:19:01.896 }, 00:19:01.896 "claimed": false, 00:19:01.896 "zoned": false, 00:19:01.896 "supported_io_types": { 00:19:01.896 "read": true, 00:19:01.896 "write": true, 00:19:01.896 "unmap": true, 00:19:01.896 "write_zeroes": true, 00:19:01.896 "flush": false, 00:19:01.896 "reset": true, 00:19:01.896 "compare": false, 00:19:01.896 "compare_and_write": false, 00:19:01.896 "abort": false, 00:19:01.896 "nvme_admin": false, 00:19:01.896 "nvme_io": false 00:19:01.896 }, 00:19:01.896 "driver_specific": { 00:19:01.896 "lvol": { 00:19:01.896 "lvol_store_uuid": "df0c28da-f497-4a3a-852f-a282cc5d2695", 00:19:01.896 "base_bdev": "nvme0n1", 00:19:01.896 "thin_provision": true, 00:19:01.896 "snapshot": false, 00:19:01.896 "clone": false, 00:19:01.896 "esnap_clone": false 00:19:01.896 } 00:19:01.896 } 00:19:01.896 } 00:19:01.896 ]' 00:19:01.896 19:19:39 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:19:01.896 19:19:39 -- common/autotest_common.sh@1360 -- # bs=4096 00:19:01.896 19:19:39 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:19:01.896 19:19:39 -- common/autotest_common.sh@1361 -- # nb=26476544 00:19:01.896 19:19:39 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:19:01.896 19:19:39 -- common/autotest_common.sh@1365 -- # echo 103424 00:19:01.896 19:19:39 -- ftl/common.sh@41 -- # local base_size=5171 00:19:01.896 19:19:39 -- ftl/common.sh@44 -- # local nvc_bdev 00:19:01.896 19:19:39 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:19:02.153 19:19:39 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:02.153 19:19:39 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:02.153 19:19:39 -- ftl/common.sh@48 -- # get_bdev_size a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:02.153 19:19:39 -- common/autotest_common.sh@1355 -- # local bdev_name=a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:02.153 19:19:39 -- common/autotest_common.sh@1356 -- # local bdev_info 00:19:02.153 19:19:39 -- common/autotest_common.sh@1357 -- # local bs 00:19:02.153 19:19:39 -- common/autotest_common.sh@1358 -- # local nb 00:19:02.153 19:19:39 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 
a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:02.411 19:19:39 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:19:02.411 { 00:19:02.411 "name": "a661aedc-0a4d-4c20-8ea0-c101e75a3816", 00:19:02.411 "aliases": [ 00:19:02.411 "lvs/nvme0n1p0" 00:19:02.411 ], 00:19:02.411 "product_name": "Logical Volume", 00:19:02.411 "block_size": 4096, 00:19:02.411 "num_blocks": 26476544, 00:19:02.411 "uuid": "a661aedc-0a4d-4c20-8ea0-c101e75a3816", 00:19:02.411 "assigned_rate_limits": { 00:19:02.411 "rw_ios_per_sec": 0, 00:19:02.411 "rw_mbytes_per_sec": 0, 00:19:02.411 "r_mbytes_per_sec": 0, 00:19:02.411 "w_mbytes_per_sec": 0 00:19:02.411 }, 00:19:02.411 "claimed": false, 00:19:02.411 "zoned": false, 00:19:02.411 "supported_io_types": { 00:19:02.411 "read": true, 00:19:02.411 "write": true, 00:19:02.411 "unmap": true, 00:19:02.411 "write_zeroes": true, 00:19:02.411 "flush": false, 00:19:02.411 "reset": true, 00:19:02.411 "compare": false, 00:19:02.411 "compare_and_write": false, 00:19:02.411 "abort": false, 00:19:02.411 "nvme_admin": false, 00:19:02.411 "nvme_io": false 00:19:02.411 }, 00:19:02.411 "driver_specific": { 00:19:02.411 "lvol": { 00:19:02.411 "lvol_store_uuid": "df0c28da-f497-4a3a-852f-a282cc5d2695", 00:19:02.411 "base_bdev": "nvme0n1", 00:19:02.411 "thin_provision": true, 00:19:02.411 "snapshot": false, 00:19:02.411 "clone": false, 00:19:02.411 "esnap_clone": false 00:19:02.411 } 00:19:02.411 } 00:19:02.411 } 00:19:02.411 ]' 00:19:02.411 19:19:39 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:19:02.669 19:19:39 -- common/autotest_common.sh@1360 -- # bs=4096 00:19:02.669 19:19:39 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:19:02.669 19:19:39 -- common/autotest_common.sh@1361 -- # nb=26476544 00:19:02.669 19:19:39 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:19:02.669 19:19:39 -- common/autotest_common.sh@1365 -- # echo 103424 00:19:02.669 19:19:39 -- ftl/common.sh@48 -- # cache_size=5171 00:19:02.669 19:19:39 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:02.926 19:19:40 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:02.926 19:19:40 -- ftl/restore.sh@48 -- # get_bdev_size a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:02.926 19:19:40 -- common/autotest_common.sh@1355 -- # local bdev_name=a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:02.926 19:19:40 -- common/autotest_common.sh@1356 -- # local bdev_info 00:19:02.926 19:19:40 -- common/autotest_common.sh@1357 -- # local bs 00:19:02.926 19:19:40 -- common/autotest_common.sh@1358 -- # local nb 00:19:02.926 19:19:40 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a661aedc-0a4d-4c20-8ea0-c101e75a3816 00:19:03.184 19:19:40 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:19:03.184 { 00:19:03.184 "name": "a661aedc-0a4d-4c20-8ea0-c101e75a3816", 00:19:03.184 "aliases": [ 00:19:03.184 "lvs/nvme0n1p0" 00:19:03.184 ], 00:19:03.184 "product_name": "Logical Volume", 00:19:03.184 "block_size": 4096, 00:19:03.184 "num_blocks": 26476544, 00:19:03.184 "uuid": "a661aedc-0a4d-4c20-8ea0-c101e75a3816", 00:19:03.184 "assigned_rate_limits": { 00:19:03.184 "rw_ios_per_sec": 0, 00:19:03.184 "rw_mbytes_per_sec": 0, 00:19:03.184 "r_mbytes_per_sec": 0, 00:19:03.184 "w_mbytes_per_sec": 0 00:19:03.184 }, 00:19:03.184 "claimed": false, 00:19:03.184 "zoned": false, 00:19:03.184 "supported_io_types": { 00:19:03.184 "read": true, 00:19:03.184 "write": true, 00:19:03.184 "unmap": true, 00:19:03.184 "write_zeroes": 
true, 00:19:03.185 "flush": false, 00:19:03.185 "reset": true, 00:19:03.185 "compare": false, 00:19:03.185 "compare_and_write": false, 00:19:03.185 "abort": false, 00:19:03.185 "nvme_admin": false, 00:19:03.185 "nvme_io": false 00:19:03.185 }, 00:19:03.185 "driver_specific": { 00:19:03.185 "lvol": { 00:19:03.185 "lvol_store_uuid": "df0c28da-f497-4a3a-852f-a282cc5d2695", 00:19:03.185 "base_bdev": "nvme0n1", 00:19:03.185 "thin_provision": true, 00:19:03.185 "snapshot": false, 00:19:03.185 "clone": false, 00:19:03.185 "esnap_clone": false 00:19:03.185 } 00:19:03.185 } 00:19:03.185 } 00:19:03.185 ]' 00:19:03.185 19:19:40 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:19:03.185 19:19:40 -- common/autotest_common.sh@1360 -- # bs=4096 00:19:03.185 19:19:40 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:19:03.185 19:19:40 -- common/autotest_common.sh@1361 -- # nb=26476544 00:19:03.185 19:19:40 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:19:03.185 19:19:40 -- common/autotest_common.sh@1365 -- # echo 103424 00:19:03.185 19:19:40 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:03.185 19:19:40 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a661aedc-0a4d-4c20-8ea0-c101e75a3816 --l2p_dram_limit 10' 00:19:03.185 19:19:40 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:03.185 19:19:40 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:19:03.185 19:19:40 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:03.185 19:19:40 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:03.185 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:03.185 19:19:40 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a661aedc-0a4d-4c20-8ea0-c101e75a3816 --l2p_dram_limit 10 -c nvc0n1p0 00:19:03.443 [2024-02-14 19:19:40.654017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.443 [2024-02-14 19:19:40.654076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:03.443 [2024-02-14 19:19:40.654101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:03.443 [2024-02-14 19:19:40.654114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.443 [2024-02-14 19:19:40.654195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.443 [2024-02-14 19:19:40.654214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:03.443 [2024-02-14 19:19:40.654230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:03.443 [2024-02-14 19:19:40.654241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.443 [2024-02-14 19:19:40.654274] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:03.443 [2024-02-14 19:19:40.655357] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:03.443 [2024-02-14 19:19:40.655401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.443 [2024-02-14 19:19:40.655416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:03.443 [2024-02-14 19:19:40.655431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:19:03.443 [2024-02-14 19:19:40.655442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.443 [2024-02-14 19:19:40.655566] 
mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4322074a-dfb9-45be-b531-8eaf623071d6 00:19:03.443 [2024-02-14 19:19:40.656546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.443 [2024-02-14 19:19:40.656593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:03.443 [2024-02-14 19:19:40.656611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:03.443 [2024-02-14 19:19:40.656624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.443 [2024-02-14 19:19:40.660850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.443 [2024-02-14 19:19:40.660897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:03.443 [2024-02-14 19:19:40.660914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.169 ms 00:19:03.443 [2024-02-14 19:19:40.660927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.443 [2024-02-14 19:19:40.661052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.443 [2024-02-14 19:19:40.661075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:03.443 [2024-02-14 19:19:40.661088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:03.443 [2024-02-14 19:19:40.661105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.443 [2024-02-14 19:19:40.661181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.443 [2024-02-14 19:19:40.661204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:03.443 [2024-02-14 19:19:40.661220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:03.443 [2024-02-14 19:19:40.661234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.443 [2024-02-14 19:19:40.661267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:03.443 [2024-02-14 19:19:40.665780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.443 [2024-02-14 19:19:40.665818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:03.443 [2024-02-14 19:19:40.665837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.517 ms 00:19:03.443 [2024-02-14 19:19:40.665849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.443 [2024-02-14 19:19:40.665899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.443 [2024-02-14 19:19:40.665915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:03.443 [2024-02-14 19:19:40.665930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:03.443 [2024-02-14 19:19:40.665941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.444 [2024-02-14 19:19:40.666001] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:03.444 [2024-02-14 19:19:40.666136] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:03.444 [2024-02-14 19:19:40.666159] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:03.444 [2024-02-14 19:19:40.666175] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] 
layout blob store 0x140 bytes 00:19:03.444 [2024-02-14 19:19:40.666192] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:03.444 [2024-02-14 19:19:40.666206] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:03.444 [2024-02-14 19:19:40.666220] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:03.444 [2024-02-14 19:19:40.666233] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:03.444 [2024-02-14 19:19:40.666246] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:03.444 [2024-02-14 19:19:40.666257] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:03.444 [2024-02-14 19:19:40.666286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.444 [2024-02-14 19:19:40.666298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:03.444 [2024-02-14 19:19:40.666312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:19:03.444 [2024-02-14 19:19:40.666323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.444 [2024-02-14 19:19:40.666399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.444 [2024-02-14 19:19:40.666415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:03.444 [2024-02-14 19:19:40.666428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:03.444 [2024-02-14 19:19:40.666439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.444 [2024-02-14 19:19:40.666552] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:03.444 [2024-02-14 19:19:40.666578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:03.444 [2024-02-14 19:19:40.666614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:03.444 [2024-02-14 19:19:40.666635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.444 [2024-02-14 19:19:40.666659] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:03.444 [2024-02-14 19:19:40.666678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:03.444 [2024-02-14 19:19:40.666701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:03.444 [2024-02-14 19:19:40.666720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:03.444 [2024-02-14 19:19:40.666745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:03.444 [2024-02-14 19:19:40.666765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:03.444 [2024-02-14 19:19:40.666790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:03.444 [2024-02-14 19:19:40.666810] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:03.444 [2024-02-14 19:19:40.666834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:03.444 [2024-02-14 19:19:40.666855] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:03.444 [2024-02-14 19:19:40.666879] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:03.444 [2024-02-14 19:19:40.666899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.444 [2024-02-14 19:19:40.666954] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:03.444 [2024-02-14 19:19:40.666981] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:03.444 [2024-02-14 19:19:40.666995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.444 [2024-02-14 19:19:40.667006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:03.444 [2024-02-14 19:19:40.667019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:03.444 [2024-02-14 19:19:40.667030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:03.444 [2024-02-14 19:19:40.667043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:03.444 [2024-02-14 19:19:40.667053] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:03.444 [2024-02-14 19:19:40.667065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:03.444 [2024-02-14 19:19:40.667080] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:03.444 [2024-02-14 19:19:40.667092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:03.444 [2024-02-14 19:19:40.667102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:03.444 [2024-02-14 19:19:40.667114] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:03.444 [2024-02-14 19:19:40.667125] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:03.444 [2024-02-14 19:19:40.667137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:03.444 [2024-02-14 19:19:40.667147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:03.444 [2024-02-14 19:19:40.667161] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:03.444 [2024-02-14 19:19:40.667171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:03.444 [2024-02-14 19:19:40.667183] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:03.444 [2024-02-14 19:19:40.667193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:03.444 [2024-02-14 19:19:40.667205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:03.444 [2024-02-14 19:19:40.667215] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:03.444 [2024-02-14 19:19:40.667227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:03.444 [2024-02-14 19:19:40.667238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:03.444 [2024-02-14 19:19:40.667250] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:03.444 [2024-02-14 19:19:40.667262] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:03.444 [2024-02-14 19:19:40.667277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:03.444 [2024-02-14 19:19:40.667288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.444 [2024-02-14 19:19:40.667302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:03.444 [2024-02-14 19:19:40.667313] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:03.444 [2024-02-14 19:19:40.667325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:03.444 [2024-02-14 19:19:40.667335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:03.444 [2024-02-14 
19:19:40.667349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:03.444 [2024-02-14 19:19:40.667360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:03.444 [2024-02-14 19:19:40.667377] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:03.444 [2024-02-14 19:19:40.667394] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:03.444 [2024-02-14 19:19:40.667410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:03.444 [2024-02-14 19:19:40.667422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:03.444 [2024-02-14 19:19:40.667435] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:03.444 [2024-02-14 19:19:40.667446] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:03.444 [2024-02-14 19:19:40.667460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:03.444 [2024-02-14 19:19:40.667473] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:03.444 [2024-02-14 19:19:40.667501] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:03.444 [2024-02-14 19:19:40.667515] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:03.444 [2024-02-14 19:19:40.667529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:03.444 [2024-02-14 19:19:40.667540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:03.444 [2024-02-14 19:19:40.667553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:03.444 [2024-02-14 19:19:40.667565] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:03.444 [2024-02-14 19:19:40.667582] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:03.444 [2024-02-14 19:19:40.667594] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:03.444 [2024-02-14 19:19:40.667608] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:03.444 [2024-02-14 19:19:40.667620] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:03.444 [2024-02-14 19:19:40.667634] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:03.444 [2024-02-14 19:19:40.667645] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:03.444 [2024-02-14 19:19:40.667658] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:03.445 [2024-02-14 19:19:40.667672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.445 [2024-02-14 19:19:40.667685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:03.445 [2024-02-14 19:19:40.667697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:19:03.445 [2024-02-14 19:19:40.667710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.445 [2024-02-14 19:19:40.685977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.445 [2024-02-14 19:19:40.686150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:03.445 [2024-02-14 19:19:40.686269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.205 ms 00:19:03.445 [2024-02-14 19:19:40.686327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.445 [2024-02-14 19:19:40.686582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.445 [2024-02-14 19:19:40.686691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:03.445 [2024-02-14 19:19:40.686805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:03.445 [2024-02-14 19:19:40.686951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.445 [2024-02-14 19:19:40.725337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.445 [2024-02-14 19:19:40.725554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:03.445 [2024-02-14 19:19:40.725686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.268 ms 00:19:03.445 [2024-02-14 19:19:40.725745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.445 [2024-02-14 19:19:40.725895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.445 [2024-02-14 19:19:40.725955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:03.445 [2024-02-14 19:19:40.726068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:03.445 [2024-02-14 19:19:40.726122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.445 [2024-02-14 19:19:40.726554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.445 [2024-02-14 19:19:40.726696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:03.445 [2024-02-14 19:19:40.726806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:19:03.445 [2024-02-14 19:19:40.726936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.445 [2024-02-14 19:19:40.727120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.445 [2024-02-14 19:19:40.727159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:03.445 [2024-02-14 19:19:40.727175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:19:03.445 [2024-02-14 19:19:40.727188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.445 [2024-02-14 19:19:40.745033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:03.445 [2024-02-14 19:19:40.745202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:03.445 [2024-02-14 19:19:40.745320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.818 ms 00:19:03.445 [2024-02-14 19:19:40.745375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.445 [2024-02-14 19:19:40.758884] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:03.445 [2024-02-14 19:19:40.761700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.445 [2024-02-14 19:19:40.761842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:03.445 [2024-02-14 19:19:40.761961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.082 ms 00:19:03.445 [2024-02-14 19:19:40.762065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.445 [2024-02-14 19:19:40.820874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.445 [2024-02-14 19:19:40.821107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:03.445 [2024-02-14 19:19:40.821232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.727 ms 00:19:03.445 [2024-02-14 19:19:40.821285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.445 [2024-02-14 19:19:40.821424] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:03.445 [2024-02-14 19:19:40.821598] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:05.974 [2024-02-14 19:19:42.941895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:42.942309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:05.974 [2024-02-14 19:19:42.942355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2120.487 ms 00:19:05.974 [2024-02-14 19:19:42.942368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 [2024-02-14 19:19:42.942669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:42.942689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:05.974 [2024-02-14 19:19:42.942708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:19:05.974 [2024-02-14 19:19:42.942719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 [2024-02-14 19:19:42.971754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:42.971813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:05.974 [2024-02-14 19:19:42.971848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.941 ms 00:19:05.974 [2024-02-14 19:19:42.971859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 [2024-02-14 19:19:42.998227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:42.998300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:05.974 [2024-02-14 19:19:42.998340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.303 ms 00:19:05.974 [2024-02-14 19:19:42.998351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 
[2024-02-14 19:19:42.998803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:42.998825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:05.974 [2024-02-14 19:19:42.998839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:19:05.974 [2024-02-14 19:19:42.998850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 [2024-02-14 19:19:43.070115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:43.070211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:05.974 [2024-02-14 19:19:43.070246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.142 ms 00:19:05.974 [2024-02-14 19:19:43.070256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 [2024-02-14 19:19:43.100630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:43.100667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:05.974 [2024-02-14 19:19:43.100703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.340 ms 00:19:05.974 [2024-02-14 19:19:43.100714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 [2024-02-14 19:19:43.102737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:43.102770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:05.974 [2024-02-14 19:19:43.102787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.978 ms 00:19:05.974 [2024-02-14 19:19:43.102797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 [2024-02-14 19:19:43.131246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:43.131295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:05.974 [2024-02-14 19:19:43.131330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.356 ms 00:19:05.974 [2024-02-14 19:19:43.131356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 [2024-02-14 19:19:43.131397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:43.131413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:05.974 [2024-02-14 19:19:43.131426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:05.974 [2024-02-14 19:19:43.131435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 [2024-02-14 19:19:43.131573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.974 [2024-02-14 19:19:43.131594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:05.974 [2024-02-14 19:19:43.131608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:05.974 [2024-02-14 19:19:43.131617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.974 [2024-02-14 19:19:43.132742] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2478.202 ms, result 0 00:19:05.974 { 00:19:05.974 "name": "ftl0", 00:19:05.974 "uuid": "4322074a-dfb9-45be-b531-8eaf623071d6" 00:19:05.974 } 00:19:05.974 19:19:43 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:05.974 19:19:43 -- 
ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:06.234 19:19:43 -- ftl/restore.sh@63 -- # echo ']}' 00:19:06.234 19:19:43 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:06.494 [2024-02-14 19:19:43.708392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.708476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:06.494 [2024-02-14 19:19:43.708495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:06.494 [2024-02-14 19:19:43.708540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.708578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:06.494 [2024-02-14 19:19:43.711632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.711664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:06.494 [2024-02-14 19:19:43.711680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.029 ms 00:19:06.494 [2024-02-14 19:19:43.711690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.711986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.712039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:06.494 [2024-02-14 19:19:43.712081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:19:06.494 [2024-02-14 19:19:43.712092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.714957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.714984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:06.494 [2024-02-14 19:19:43.715016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:19:06.494 [2024-02-14 19:19:43.715026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.720599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.720628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:06.494 [2024-02-14 19:19:43.720662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.546 ms 00:19:06.494 [2024-02-14 19:19:43.720682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.747509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.747543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:06.494 [2024-02-14 19:19:43.747578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.747 ms 00:19:06.494 [2024-02-14 19:19:43.747604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.763650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.763685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:06.494 [2024-02-14 19:19:43.763720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.000 ms 00:19:06.494 [2024-02-14 19:19:43.763731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.763900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.763924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:06.494 [2024-02-14 19:19:43.763939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:19:06.494 [2024-02-14 19:19:43.763949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.789787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.789822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:06.494 [2024-02-14 19:19:43.789839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.806 ms 00:19:06.494 [2024-02-14 19:19:43.789849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.815556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.815588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:06.494 [2024-02-14 19:19:43.815621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.660 ms 00:19:06.494 [2024-02-14 19:19:43.815630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.842771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.842811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:06.494 [2024-02-14 19:19:43.842845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.096 ms 00:19:06.494 [2024-02-14 19:19:43.842856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.871683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.494 [2024-02-14 19:19:43.871720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:06.494 [2024-02-14 19:19:43.871754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.712 ms 00:19:06.494 [2024-02-14 19:19:43.871765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.494 [2024-02-14 19:19:43.871815] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:06.494 [2024-02-14 19:19:43.871853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.871873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.871884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.871897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.871908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.871931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.871941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.871954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 
[2024-02-14 19:19:43.871965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.871978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.871988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:06.494 [2024-02-14 19:19:43.872134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 00:19:06.495 [2024-02-14 19:19:43.872281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.872991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:06.495 [2024-02-14 19:19:43.873187] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:06.495 [2024-02-14 19:19:43.873215] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4322074a-dfb9-45be-b531-8eaf623071d6 00:19:06.495 [2024-02-14 19:19:43.873227] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:06.495 [2024-02-14 19:19:43.873240] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:06.495 [2024-02-14 19:19:43.873250] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:06.495 [2024-02-14 19:19:43.873263] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:06.495 [2024-02-14 19:19:43.873273] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:06.495 [2024-02-14 19:19:43.873286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:06.495 [2024-02-14 19:19:43.873296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:06.495 
[2024-02-14 19:19:43.873307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:06.495 [2024-02-14 19:19:43.873317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:06.495 [2024-02-14 19:19:43.873331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.495 [2024-02-14 19:19:43.873342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:06.495 [2024-02-14 19:19:43.873355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.521 ms 00:19:06.495 [2024-02-14 19:19:43.873368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.495 [2024-02-14 19:19:43.889095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.496 [2024-02-14 19:19:43.889130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:06.496 [2024-02-14 19:19:43.889165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.660 ms 00:19:06.496 [2024-02-14 19:19:43.889177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.496 [2024-02-14 19:19:43.889409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.496 [2024-02-14 19:19:43.889429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:06.496 [2024-02-14 19:19:43.889446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:19:06.496 [2024-02-14 19:19:43.889456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:43.944429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:43.944474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:06.755 [2024-02-14 19:19:43.944520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:43.944533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:43.944618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:43.944634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:06.755 [2024-02-14 19:19:43.944650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:43.944661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:43.944769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:43.944789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:06.755 [2024-02-14 19:19:43.944803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:43.944813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:43.944875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:43.944889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:06.755 [2024-02-14 19:19:43.944903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:43.944916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:44.039182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:44.039252] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:06.755 [2024-02-14 19:19:44.039288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:44.039300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:44.075790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:44.075844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:06.755 [2024-02-14 19:19:44.075898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:44.075910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:44.076008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:44.076027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:06.755 [2024-02-14 19:19:44.076041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:44.076052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:44.076119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:44.076137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:06.755 [2024-02-14 19:19:44.076151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:44.076162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:44.076315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:44.076333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:06.755 [2024-02-14 19:19:44.076346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:44.076357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:44.076412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:44.076429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:06.755 [2024-02-14 19:19:44.076442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:44.076452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:44.076502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:44.076517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:06.755 [2024-02-14 19:19:44.076530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:44.076540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:44.076626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.755 [2024-02-14 19:19:44.076643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:06.755 [2024-02-14 19:19:44.076656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.755 [2024-02-14 19:19:44.076667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.755 [2024-02-14 19:19:44.076873] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 368.404 ms, result 0 00:19:06.755 true 00:19:06.755 19:19:44 -- ftl/restore.sh@66 -- # killprocess 74302 00:19:06.755 19:19:44 -- common/autotest_common.sh@924 -- # '[' -z 74302 ']' 00:19:06.755 19:19:44 -- common/autotest_common.sh@928 -- # kill -0 74302 00:19:06.755 19:19:44 -- common/autotest_common.sh@929 -- # uname 00:19:06.755 19:19:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:06.755 19:19:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 74302 00:19:06.755 killing process with pid 74302 00:19:06.755 19:19:44 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:06.755 19:19:44 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:06.755 19:19:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 74302' 00:19:06.755 19:19:44 -- common/autotest_common.sh@943 -- # kill 74302 00:19:06.755 19:19:44 -- common/autotest_common.sh@948 -- # wait 74302 00:19:12.025 19:19:48 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:16.233 262144+0 records in 00:19:16.233 262144+0 records out 00:19:16.233 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.45015 s, 241 MB/s 00:19:16.233 19:19:53 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:18.136 19:19:55 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:18.136 [2024-02-14 19:19:55.269546] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:18.136 [2024-02-14 19:19:55.269741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74563 ] 00:19:18.136 [2024-02-14 19:19:55.439683] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.394 [2024-02-14 19:19:55.647614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.394 [2024-02-14 19:19:55.647981] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:19:18.653 [2024-02-14 19:19:55.930844] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:18.653 [2024-02-14 19:19:55.931171] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:18.913 [2024-02-14 19:19:56.081832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.081885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:18.913 [2024-02-14 19:19:56.081910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:18.913 [2024-02-14 19:19:56.081921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.081986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.082005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:18.913 [2024-02-14 19:19:56.082031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:18.913 [2024-02-14 19:19:56.082041] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.082083] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:18.913 [2024-02-14 19:19:56.083079] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:18.913 [2024-02-14 19:19:56.083119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.083133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:18.913 [2024-02-14 19:19:56.083145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:19:18.913 [2024-02-14 19:19:56.083159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.084402] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:18.913 [2024-02-14 19:19:56.099490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.099555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:18.913 [2024-02-14 19:19:56.099586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.104 ms 00:19:18.913 [2024-02-14 19:19:56.099597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.099676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.099695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:18.913 [2024-02-14 19:19:56.099712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:18.913 [2024-02-14 19:19:56.099722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.104217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.104254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:18.913 [2024-02-14 19:19:56.104284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.407 ms 00:19:18.913 [2024-02-14 19:19:56.104294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.104392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.104410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:18.913 [2024-02-14 19:19:56.104422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:18.913 [2024-02-14 19:19:56.104435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.104485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.104521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:18.913 [2024-02-14 19:19:56.104552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:18.913 [2024-02-14 19:19:56.104562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.104604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:18.913 [2024-02-14 19:19:56.108765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.108799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO 
channel 00:19:18.913 [2024-02-14 19:19:56.108828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.169 ms 00:19:18.913 [2024-02-14 19:19:56.108855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.108901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.108931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:18.913 [2024-02-14 19:19:56.108943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:18.913 [2024-02-14 19:19:56.108957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.108996] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:18.913 [2024-02-14 19:19:56.109024] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:18.913 [2024-02-14 19:19:56.109061] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:18.913 [2024-02-14 19:19:56.109079] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:18.913 [2024-02-14 19:19:56.109154] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:18.913 [2024-02-14 19:19:56.109168] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:18.913 [2024-02-14 19:19:56.109184] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:18.913 [2024-02-14 19:19:56.109198] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:18.913 [2024-02-14 19:19:56.109240] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:18.913 [2024-02-14 19:19:56.109250] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:18.913 [2024-02-14 19:19:56.109259] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:18.913 [2024-02-14 19:19:56.109268] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:18.913 [2024-02-14 19:19:56.109277] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:18.913 [2024-02-14 19:19:56.109287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.109297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:18.913 [2024-02-14 19:19:56.109307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:19:18.913 [2024-02-14 19:19:56.109315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.109383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.913 [2024-02-14 19:19:56.109397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:18.913 [2024-02-14 19:19:56.109408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:18.913 [2024-02-14 19:19:56.109417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.913 [2024-02-14 19:19:56.109487] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:18.913 [2024-02-14 19:19:56.109501] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:18.913 [2024-02-14 19:19:56.109512] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.913 [2024-02-14 19:19:56.109522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.913 [2024-02-14 19:19:56.109532] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:18.913 [2024-02-14 19:19:56.109541] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:18.913 [2024-02-14 19:19:56.109550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:18.913 [2024-02-14 19:19:56.109579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:18.913 [2024-02-14 19:19:56.109609] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:18.913 [2024-02-14 19:19:56.109619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.913 [2024-02-14 19:19:56.109628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:18.913 [2024-02-14 19:19:56.109637] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:18.913 [2024-02-14 19:19:56.109651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.913 [2024-02-14 19:19:56.109706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:18.913 [2024-02-14 19:19:56.109720] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:18.913 [2024-02-14 19:19:56.109729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.913 [2024-02-14 19:19:56.109739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:18.913 [2024-02-14 19:19:56.109748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:18.913 [2024-02-14 19:19:56.109758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.913 [2024-02-14 19:19:56.109768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:18.913 [2024-02-14 19:19:56.109792] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:18.913 [2024-02-14 19:19:56.109802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:18.913 [2024-02-14 19:19:56.109812] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:18.913 [2024-02-14 19:19:56.109822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:18.913 [2024-02-14 19:19:56.109832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:18.913 [2024-02-14 19:19:56.109842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:18.913 [2024-02-14 19:19:56.109851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:18.913 [2024-02-14 19:19:56.109861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:18.913 [2024-02-14 19:19:56.109871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:18.913 [2024-02-14 19:19:56.109880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:18.914 [2024-02-14 19:19:56.109890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:18.914 [2024-02-14 19:19:56.109899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:18.914 [2024-02-14 19:19:56.109909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:18.914 [2024-02-14 
19:19:56.109918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:18.914 [2024-02-14 19:19:56.109928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:18.914 [2024-02-14 19:19:56.109937] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:18.914 [2024-02-14 19:19:56.109947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.914 [2024-02-14 19:19:56.109956] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:18.914 [2024-02-14 19:19:56.109966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:18.914 [2024-02-14 19:19:56.109975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.914 [2024-02-14 19:19:56.109985] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:18.914 [2024-02-14 19:19:56.110000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:18.914 [2024-02-14 19:19:56.110010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.914 [2024-02-14 19:19:56.110020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.914 [2024-02-14 19:19:56.110031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:18.914 [2024-02-14 19:19:56.110043] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:18.914 [2024-02-14 19:19:56.110052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:18.914 [2024-02-14 19:19:56.110063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:18.914 [2024-02-14 19:19:56.110072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:18.914 [2024-02-14 19:19:56.110082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:18.914 [2024-02-14 19:19:56.110093] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:18.914 [2024-02-14 19:19:56.110106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.914 [2024-02-14 19:19:56.110118] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:18.914 [2024-02-14 19:19:56.110129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:18.914 [2024-02-14 19:19:56.110140] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:18.914 [2024-02-14 19:19:56.110151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:18.914 [2024-02-14 19:19:56.110177] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:18.914 [2024-02-14 19:19:56.110187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:18.914 [2024-02-14 19:19:56.110198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:18.914 [2024-02-14 19:19:56.110208] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:18.914 [2024-02-14 19:19:56.110219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:18.914 [2024-02-14 19:19:56.110229] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:18.914 [2024-02-14 19:19:56.110240] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:18.914 [2024-02-14 19:19:56.110250] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:18.914 [2024-02-14 19:19:56.110261] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:18.914 [2024-02-14 19:19:56.110271] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:18.914 [2024-02-14 19:19:56.110283] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.914 [2024-02-14 19:19:56.110294] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:18.914 [2024-02-14 19:19:56.110305] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:18.914 [2024-02-14 19:19:56.110315] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:18.914 [2024-02-14 19:19:56.110326] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:18.914 [2024-02-14 19:19:56.110337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.110348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:18.914 [2024-02-14 19:19:56.110358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:19:18.914 [2024-02-14 19:19:56.110371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.127190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.127232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.914 [2024-02-14 19:19:56.127264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.731 ms 00:19:18.914 [2024-02-14 19:19:56.127279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.127367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.127381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:18.914 [2024-02-14 19:19:56.127392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:18.914 [2024-02-14 19:19:56.127402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.175263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.175313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 
00:19:18.914 [2024-02-14 19:19:56.175346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.802 ms 00:19:18.914 [2024-02-14 19:19:56.175356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.175413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.175428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.914 [2024-02-14 19:19:56.175439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:18.914 [2024-02-14 19:19:56.175449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.175869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.175889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.914 [2024-02-14 19:19:56.175929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:18.914 [2024-02-14 19:19:56.175940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.176095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.176119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.914 [2024-02-14 19:19:56.176131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:19:18.914 [2024-02-14 19:19:56.176141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.192144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.192196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.914 [2024-02-14 19:19:56.192243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.977 ms 00:19:18.914 [2024-02-14 19:19:56.192253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.207940] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:18.914 [2024-02-14 19:19:56.207981] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:18.914 [2024-02-14 19:19:56.208013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.208024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:18.914 [2024-02-14 19:19:56.208035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.645 ms 00:19:18.914 [2024-02-14 19:19:56.208045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.235843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.235883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:18.914 [2024-02-14 19:19:56.235915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.755 ms 00:19:18.914 [2024-02-14 19:19:56.235925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.250582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.250645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:18.914 [2024-02-14 19:19:56.250661] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.605 ms 00:19:18.914 [2024-02-14 19:19:56.250670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.265047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.265082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:18.914 [2024-02-14 19:19:56.265112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.338 ms 00:19:18.914 [2024-02-14 19:19:56.265121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.914 [2024-02-14 19:19:56.265576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.914 [2024-02-14 19:19:56.265602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:18.914 [2024-02-14 19:19:56.265615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:19:18.914 [2024-02-14 19:19:56.265625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.173 [2024-02-14 19:19:56.341698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.173 [2024-02-14 19:19:56.341758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:19.173 [2024-02-14 19:19:56.341778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.032 ms 00:19:19.173 [2024-02-14 19:19:56.341789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.173 [2024-02-14 19:19:56.354529] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:19.173 [2024-02-14 19:19:56.357072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.173 [2024-02-14 19:19:56.357105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:19.173 [2024-02-14 19:19:56.357123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.220 ms 00:19:19.173 [2024-02-14 19:19:56.357139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.173 [2024-02-14 19:19:56.357256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.173 [2024-02-14 19:19:56.357274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:19.173 [2024-02-14 19:19:56.357286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:19.173 [2024-02-14 19:19:56.357296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.173 [2024-02-14 19:19:56.357373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.173 [2024-02-14 19:19:56.357390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:19.173 [2024-02-14 19:19:56.357401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:19.173 [2024-02-14 19:19:56.357426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.173 [2024-02-14 19:19:56.359290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.173 [2024-02-14 19:19:56.359325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:19.173 [2024-02-14 19:19:56.359355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:19:19.173 [2024-02-14 19:19:56.359365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.173 [2024-02-14 19:19:56.359404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:19.173 [2024-02-14 19:19:56.359419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:19.173 [2024-02-14 19:19:56.359431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:19.173 [2024-02-14 19:19:56.359441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.173 [2024-02-14 19:19:56.359480] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:19.173 [2024-02-14 19:19:56.359496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.173 [2024-02-14 19:19:56.359530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:19.173 [2024-02-14 19:19:56.359542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:19.173 [2024-02-14 19:19:56.359553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.173 [2024-02-14 19:19:56.390089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.173 [2024-02-14 19:19:56.390262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:19.173 [2024-02-14 19:19:56.390397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.496 ms 00:19:19.173 [2024-02-14 19:19:56.390447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.173 [2024-02-14 19:19:56.390638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.173 [2024-02-14 19:19:56.390756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:19.173 [2024-02-14 19:19:56.390872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:19.173 [2024-02-14 19:19:56.390923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.173 [2024-02-14 19:19:56.392156] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.783 ms, result 0 00:20:00.505  Copying: 24/1024 [MB] (24 MBps) Copying: 49/1024 [MB] (25 MBps) Copying: 76/1024 [MB] (26 MBps) Copying: 100/1024 [MB] (23 MBps) Copying: 125/1024 [MB] (25 MBps) Copying: 149/1024 [MB] (24 MBps) Copying: 173/1024 [MB] (24 MBps) Copying: 198/1024 [MB] (25 MBps) Copying: 223/1024 [MB] (25 MBps) Copying: 249/1024 [MB] (25 MBps) Copying: 275/1024 [MB] (25 MBps) Copying: 300/1024 [MB] (25 MBps) Copying: 325/1024 [MB] (24 MBps) Copying: 350/1024 [MB] (25 MBps) Copying: 376/1024 [MB] (25 MBps) Copying: 402/1024 [MB] (25 MBps) Copying: 428/1024 [MB] (26 MBps) Copying: 453/1024 [MB] (24 MBps) Copying: 476/1024 [MB] (23 MBps) Copying: 501/1024 [MB] (24 MBps) Copying: 525/1024 [MB] (24 MBps) Copying: 550/1024 [MB] (24 MBps) Copying: 574/1024 [MB] (24 MBps) Copying: 599/1024 [MB] (24 MBps) Copying: 624/1024 [MB] (24 MBps) Copying: 648/1024 [MB] (24 MBps) Copying: 673/1024 [MB] (24 MBps) Copying: 698/1024 [MB] (24 MBps) Copying: 722/1024 [MB] (24 MBps) Copying: 746/1024 [MB] (24 MBps) Copying: 770/1024 [MB] (24 MBps) Copying: 794/1024 [MB] (23 MBps) Copying: 818/1024 [MB] (24 MBps) Copying: 841/1024 [MB] (22 MBps) Copying: 865/1024 [MB] (24 MBps) Copying: 889/1024 [MB] (24 MBps) Copying: 914/1024 [MB] (24 MBps) Copying: 938/1024 [MB] (24 MBps) Copying: 963/1024 [MB] (24 MBps) Copying: 987/1024 [MB] (24 MBps) Copying: 1011/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-02-14 19:20:37.899909] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: 
deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:20:00.505 [2024-02-14 19:20:37.900116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.505 [2024-02-14 19:20:37.900139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:00.505 [2024-02-14 19:20:37.900153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:00.505 [2024-02-14 19:20:37.900162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.505 [2024-02-14 19:20:37.900192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:00.505 [2024-02-14 19:20:37.903289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.505 [2024-02-14 19:20:37.903319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:00.505 [2024-02-14 19:20:37.903363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.079 ms 00:20:00.505 [2024-02-14 19:20:37.903372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.505 [2024-02-14 19:20:37.904904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.505 [2024-02-14 19:20:37.904969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:00.505 [2024-02-14 19:20:37.904999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.509 ms 00:20:00.505 [2024-02-14 19:20:37.905008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.505 [2024-02-14 19:20:37.919562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.505 [2024-02-14 19:20:37.919648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:00.505 [2024-02-14 19:20:37.919667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.535 ms 00:20:00.505 [2024-02-14 19:20:37.919677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.766 [2024-02-14 19:20:37.926055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.766 [2024-02-14 19:20:37.926089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:00.766 [2024-02-14 19:20:37.926103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.341 ms 00:20:00.766 [2024-02-14 19:20:37.926114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.766 [2024-02-14 19:20:37.951720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.766 [2024-02-14 19:20:37.951756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:00.766 [2024-02-14 19:20:37.951785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.489 ms 00:20:00.766 [2024-02-14 19:20:37.951794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.766 [2024-02-14 19:20:37.967199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.766 [2024-02-14 19:20:37.967232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:00.766 [2024-02-14 19:20:37.967268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.368 ms 00:20:00.766 [2024-02-14 19:20:37.967278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.766 [2024-02-14 19:20:37.967409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.766 [2024-02-14 
19:20:37.967427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:00.766 [2024-02-14 19:20:37.967438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:00.766 [2024-02-14 19:20:37.967446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.766 [2024-02-14 19:20:37.992813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.766 [2024-02-14 19:20:37.992874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:00.766 [2024-02-14 19:20:37.992904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.351 ms 00:20:00.766 [2024-02-14 19:20:37.992912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.766 [2024-02-14 19:20:38.018733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.766 [2024-02-14 19:20:38.018766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:00.766 [2024-02-14 19:20:38.018795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.786 ms 00:20:00.766 [2024-02-14 19:20:38.018804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.766 [2024-02-14 19:20:38.045118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.766 [2024-02-14 19:20:38.045151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:00.766 [2024-02-14 19:20:38.045179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.278 ms 00:20:00.766 [2024-02-14 19:20:38.045188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.766 [2024-02-14 19:20:38.071065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.766 [2024-02-14 19:20:38.071098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:00.766 [2024-02-14 19:20:38.071126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.791 ms 00:20:00.766 [2024-02-14 19:20:38.071135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.766 [2024-02-14 19:20:38.071169] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:00.766 [2024-02-14 19:20:38.071189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 
19:20:38.071273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:00.766 [2024-02-14 19:20:38.071327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 
00:20:00.767 [2024-02-14 19:20:38.071538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 
wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.071993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:00.767 [2024-02-14 19:20:38.072284] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:00.767 [2024-02-14 19:20:38.072294] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4322074a-dfb9-45be-b531-8eaf623071d6 00:20:00.767 [2024-02-14 19:20:38.072304] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:00.767 [2024-02-14 19:20:38.072314] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:00.768 [2024-02-14 19:20:38.072323] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:00.768 [2024-02-14 19:20:38.072332] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:00.768 [2024-02-14 19:20:38.072341] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:00.768 [2024-02-14 19:20:38.072351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:00.768 [2024-02-14 19:20:38.072360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:00.768 [2024-02-14 19:20:38.072369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:00.768 [2024-02-14 19:20:38.072377] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:00.768 [2024-02-14 19:20:38.072387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.768 [2024-02-14 19:20:38.072403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:00.768 [2024-02-14 19:20:38.072423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:20:00.768 [2024-02-14 19:20:38.072433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.768 [2024-02-14 19:20:38.086532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.768 [2024-02-14 19:20:38.086607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:00.768 [2024-02-14 19:20:38.086638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.062 ms 00:20:00.768 [2024-02-14 19:20:38.086648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.768 [2024-02-14 19:20:38.086894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.768 [2024-02-14 19:20:38.086909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:00.768 [2024-02-14 19:20:38.086936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:20:00.768 [2024-02-14 19:20:38.086961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.768 [2024-02-14 19:20:38.125448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.768 [2024-02-14 19:20:38.125512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.768 [2024-02-14 19:20:38.125543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.768 [2024-02-14 19:20:38.125552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.768 [2024-02-14 19:20:38.125608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.768 [2024-02-14 19:20:38.125636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.768 [2024-02-14 19:20:38.125646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.768 [2024-02-14 19:20:38.125655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.768 [2024-02-14 19:20:38.125765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.768 [2024-02-14 19:20:38.125784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.768 [2024-02-14 19:20:38.125797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.768 [2024-02-14 19:20:38.125807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.768 [2024-02-14 19:20:38.125835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.768 [2024-02-14 19:20:38.125849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.768 [2024-02-14 19:20:38.125859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.768 [2024-02-14 19:20:38.125870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.027 [2024-02-14 19:20:38.210837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.027 [2024-02-14 19:20:38.210895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.027 [2024-02-14 19:20:38.210927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:20:01.027 [2024-02-14 19:20:38.210936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.027 [2024-02-14 19:20:38.247191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.027 [2024-02-14 19:20:38.247226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.027 [2024-02-14 19:20:38.247255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.027 [2024-02-14 19:20:38.247264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.027 [2024-02-14 19:20:38.247339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.027 [2024-02-14 19:20:38.247355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.027 [2024-02-14 19:20:38.247365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.027 [2024-02-14 19:20:38.247374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.027 [2024-02-14 19:20:38.247418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.027 [2024-02-14 19:20:38.247431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.027 [2024-02-14 19:20:38.247447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.027 [2024-02-14 19:20:38.247455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.027 [2024-02-14 19:20:38.247623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.027 [2024-02-14 19:20:38.247641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.027 [2024-02-14 19:20:38.247669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.027 [2024-02-14 19:20:38.247679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.027 [2024-02-14 19:20:38.247728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.028 [2024-02-14 19:20:38.247745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:01.028 [2024-02-14 19:20:38.247762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.028 [2024-02-14 19:20:38.247788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.028 [2024-02-14 19:20:38.247832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.028 [2024-02-14 19:20:38.247875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.028 [2024-02-14 19:20:38.247900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.028 [2024-02-14 19:20:38.247909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.028 [2024-02-14 19:20:38.248028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.028 [2024-02-14 19:20:38.248043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.028 [2024-02-14 19:20:38.248059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.028 [2024-02-14 19:20:38.248068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.028 [2024-02-14 19:20:38.248193] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.044 ms, result 0 00:20:02.407 00:20:02.407 00:20:02.407 19:20:39 -- ftl/restore.sh@74 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:02.407 [2024-02-14 19:20:39.690600] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:20:02.407 [2024-02-14 19:20:39.690761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75011 ] 00:20:02.665 [2024-02-14 19:20:39.856827] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.665 [2024-02-14 19:20:40.013745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.665 [2024-02-14 19:20:40.013852] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:20:02.924 [2024-02-14 19:20:40.270980] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:02.924 [2024-02-14 19:20:40.271042] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:03.183 [2024-02-14 19:20:40.420688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.183 [2024-02-14 19:20:40.420731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:03.183 [2024-02-14 19:20:40.420768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:03.183 [2024-02-14 19:20:40.420778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.183 [2024-02-14 19:20:40.420835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.183 [2024-02-14 19:20:40.420851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.183 [2024-02-14 19:20:40.420861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:03.183 [2024-02-14 19:20:40.420870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.183 [2024-02-14 19:20:40.420895] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:03.183 [2024-02-14 19:20:40.421712] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:03.183 [2024-02-14 19:20:40.421761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.183 [2024-02-14 19:20:40.421774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.183 [2024-02-14 19:20:40.421786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:20:03.183 [2024-02-14 19:20:40.421801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.183 [2024-02-14 19:20:40.423050] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:03.183 [2024-02-14 19:20:40.435917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.183 [2024-02-14 19:20:40.435953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:03.183 [2024-02-14 19:20:40.435984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.868 ms 00:20:03.183 [2024-02-14 19:20:40.435994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.183 [2024-02-14 
19:20:40.436052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.183 [2024-02-14 19:20:40.436068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:03.183 [2024-02-14 19:20:40.436079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:03.183 [2024-02-14 19:20:40.436087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.183 [2024-02-14 19:20:40.440219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.183 [2024-02-14 19:20:40.440253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.183 [2024-02-14 19:20:40.440282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.066 ms 00:20:03.183 [2024-02-14 19:20:40.440291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.183 [2024-02-14 19:20:40.440381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.183 [2024-02-14 19:20:40.440397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.183 [2024-02-14 19:20:40.440408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:03.183 [2024-02-14 19:20:40.440419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.183 [2024-02-14 19:20:40.440480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.183 [2024-02-14 19:20:40.440540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:03.183 [2024-02-14 19:20:40.440557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:03.183 [2024-02-14 19:20:40.440567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.183 [2024-02-14 19:20:40.440597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:03.183 [2024-02-14 19:20:40.444322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.183 [2024-02-14 19:20:40.444358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.183 [2024-02-14 19:20:40.444387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.732 ms 00:20:03.183 [2024-02-14 19:20:40.444397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.183 [2024-02-14 19:20:40.444442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.183 [2024-02-14 19:20:40.444457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:03.183 [2024-02-14 19:20:40.444469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:03.183 [2024-02-14 19:20:40.444482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.183 [2024-02-14 19:20:40.444582] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:03.184 [2024-02-14 19:20:40.444611] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:03.184 [2024-02-14 19:20:40.444647] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:03.184 [2024-02-14 19:20:40.444665] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:03.184 [2024-02-14 19:20:40.444733] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob 
store 0x138 bytes 00:20:03.184 [2024-02-14 19:20:40.444747] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:03.184 [2024-02-14 19:20:40.444763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:03.184 [2024-02-14 19:20:40.444777] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:03.184 [2024-02-14 19:20:40.444788] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:03.184 [2024-02-14 19:20:40.444814] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:03.184 [2024-02-14 19:20:40.444823] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:03.184 [2024-02-14 19:20:40.444832] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:03.184 [2024-02-14 19:20:40.444857] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:03.184 [2024-02-14 19:20:40.444868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.184 [2024-02-14 19:20:40.444892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:03.184 [2024-02-14 19:20:40.444903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:20:03.184 [2024-02-14 19:20:40.444913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.184 [2024-02-14 19:20:40.444998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.184 [2024-02-14 19:20:40.445017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:03.184 [2024-02-14 19:20:40.445028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:03.184 [2024-02-14 19:20:40.445037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.184 [2024-02-14 19:20:40.445129] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:03.184 [2024-02-14 19:20:40.445144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:03.184 [2024-02-14 19:20:40.445156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:03.184 [2024-02-14 19:20:40.445181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445206] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:03.184 [2024-02-14 19:20:40.445214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:03.184 [2024-02-14 19:20:40.445234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:03.184 [2024-02-14 19:20:40.445243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:03.184 [2024-02-14 19:20:40.445277] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:03.184 [2024-02-14 19:20:40.445286] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:03.184 [2024-02-14 19:20:40.445315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:03.184 [2024-02-14 19:20:40.445327] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 
00:20:03.184 [2024-02-14 19:20:40.445340] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:03.184 [2024-02-14 19:20:40.445349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445362] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:03.184 [2024-02-14 19:20:40.445371] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:03.184 [2024-02-14 19:20:40.445395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:03.184 [2024-02-14 19:20:40.445425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:03.184 [2024-02-14 19:20:40.445440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:03.184 [2024-02-14 19:20:40.445449] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:03.184 [2024-02-14 19:20:40.445461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:03.184 [2024-02-14 19:20:40.445516] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:03.184 [2024-02-14 19:20:40.445528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:03.184 [2024-02-14 19:20:40.445554] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:03.184 [2024-02-14 19:20:40.445584] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:03.184 [2024-02-14 19:20:40.445625] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:03.184 [2024-02-14 19:20:40.445634] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:03.184 [2024-02-14 19:20:40.445658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:03.184 [2024-02-14 19:20:40.445668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:03.184 [2024-02-14 19:20:40.445711] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:03.184 [2024-02-14 19:20:40.445721] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:03.184 [2024-02-14 19:20:40.445730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:03.184 [2024-02-14 19:20:40.445739] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:03.184 [2024-02-14 19:20:40.445755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:03.184 [2024-02-14 19:20:40.445765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:03.184 [2024-02-14 19:20:40.445775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.184 [2024-02-14 19:20:40.445786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:03.184 [2024-02-14 19:20:40.445796] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 102400.25 MiB 00:20:03.184 [2024-02-14 19:20:40.445805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:03.184 [2024-02-14 19:20:40.445814] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:03.184 [2024-02-14 19:20:40.445823] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:03.184 [2024-02-14 19:20:40.445833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:03.184 [2024-02-14 19:20:40.445844] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:03.184 [2024-02-14 19:20:40.445856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:03.184 [2024-02-14 19:20:40.445867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:03.184 [2024-02-14 19:20:40.445878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:03.184 [2024-02-14 19:20:40.445888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:03.184 [2024-02-14 19:20:40.445898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:03.184 [2024-02-14 19:20:40.445908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:03.184 [2024-02-14 19:20:40.445918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:03.184 [2024-02-14 19:20:40.445928] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:03.184 [2024-02-14 19:20:40.445938] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:03.184 [2024-02-14 19:20:40.445963] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:03.184 [2024-02-14 19:20:40.445972] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:03.184 [2024-02-14 19:20:40.445982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:03.184 [2024-02-14 19:20:40.445992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:03.184 [2024-02-14 19:20:40.446017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:03.184 [2024-02-14 19:20:40.446026] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:03.184 [2024-02-14 19:20:40.446050] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:03.184 [2024-02-14 19:20:40.446060] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x20 blk_sz:0x20 00:20:03.184 [2024-02-14 19:20:40.446070] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:03.184 [2024-02-14 19:20:40.446079] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:03.184 [2024-02-14 19:20:40.446089] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:03.184 [2024-02-14 19:20:40.446099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.184 [2024-02-14 19:20:40.446108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:03.184 [2024-02-14 19:20:40.446117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:20:03.184 [2024-02-14 19:20:40.446129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.184 [2024-02-14 19:20:40.463760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.184 [2024-02-14 19:20:40.463831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.185 [2024-02-14 19:20:40.463864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.585 ms 00:20:03.185 [2024-02-14 19:20:40.463888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.463976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.463989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:03.185 [2024-02-14 19:20:40.464001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:03.185 [2024-02-14 19:20:40.464011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.504448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.504519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.185 [2024-02-14 19:20:40.504553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.376 ms 00:20:03.185 [2024-02-14 19:20:40.504563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.504633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.504648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.185 [2024-02-14 19:20:40.504659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:03.185 [2024-02-14 19:20:40.504668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.505076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.505100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.185 [2024-02-14 19:20:40.505117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:20:03.185 [2024-02-14 19:20:40.505127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.505272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.505309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.185 [2024-02-14 19:20:40.505322] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:20:03.185 [2024-02-14 19:20:40.505331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.520698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.520733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.185 [2024-02-14 19:20:40.520764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.344 ms 00:20:03.185 [2024-02-14 19:20:40.520790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.534541] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:03.185 [2024-02-14 19:20:40.534602] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:03.185 [2024-02-14 19:20:40.534635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.534660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:03.185 [2024-02-14 19:20:40.534672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.730 ms 00:20:03.185 [2024-02-14 19:20:40.534681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.558851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.558902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:03.185 [2024-02-14 19:20:40.558934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.112 ms 00:20:03.185 [2024-02-14 19:20:40.558943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.571531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.571565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:03.185 [2024-02-14 19:20:40.571594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.541 ms 00:20:03.185 [2024-02-14 19:20:40.571604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.584002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.584038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:03.185 [2024-02-14 19:20:40.584068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.361 ms 00:20:03.185 [2024-02-14 19:20:40.584077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.185 [2024-02-14 19:20:40.584464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.185 [2024-02-14 19:20:40.584500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:03.185 [2024-02-14 19:20:40.584515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:20:03.185 [2024-02-14 19:20:40.584525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.444 [2024-02-14 19:20:40.646347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.444 [2024-02-14 19:20:40.646408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:03.444 [2024-02-14 19:20:40.646442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 61.801 ms 00:20:03.444 [2024-02-14 19:20:40.646451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.444 [2024-02-14 19:20:40.657252] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:03.444 [2024-02-14 19:20:40.659539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.444 [2024-02-14 19:20:40.659584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:03.444 [2024-02-14 19:20:40.659614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.006 ms 00:20:03.444 [2024-02-14 19:20:40.659624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.444 [2024-02-14 19:20:40.659709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.444 [2024-02-14 19:20:40.659725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:03.444 [2024-02-14 19:20:40.659737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:03.444 [2024-02-14 19:20:40.659746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.444 [2024-02-14 19:20:40.659814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.444 [2024-02-14 19:20:40.659829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:03.444 [2024-02-14 19:20:40.659839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:03.444 [2024-02-14 19:20:40.659852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.444 [2024-02-14 19:20:40.661548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.444 [2024-02-14 19:20:40.661758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:03.444 [2024-02-14 19:20:40.661884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.675 ms 00:20:03.444 [2024-02-14 19:20:40.661954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.444 [2024-02-14 19:20:40.662131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.444 [2024-02-14 19:20:40.662175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:03.444 [2024-02-14 19:20:40.662210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:03.444 [2024-02-14 19:20:40.662246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.444 [2024-02-14 19:20:40.662315] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:03.444 [2024-02-14 19:20:40.662416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.444 [2024-02-14 19:20:40.662454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:03.444 [2024-02-14 19:20:40.662498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:03.444 [2024-02-14 19:20:40.662556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.444 [2024-02-14 19:20:40.687895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.444 [2024-02-14 19:20:40.688070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:03.444 [2024-02-14 19:20:40.688215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.285 ms 00:20:03.444 [2024-02-14 19:20:40.688270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:03.444 [2024-02-14 19:20:40.688349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.444 [2024-02-14 19:20:40.688366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:03.444 [2024-02-14 19:20:40.688378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:03.444 [2024-02-14 19:20:40.688388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.444 [2024-02-14 19:20:40.689708] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.365 ms, result 0 00:20:46.552  Copying: 23/1024 [MB] (23 MBps) Copying: 47/1024 [MB] (24 MBps) Copying: 71/1024 [MB] (23 MBps) Copying: 95/1024 [MB] (23 MBps) Copying: 118/1024 [MB] (23 MBps) Copying: 142/1024 [MB] (23 MBps) Copying: 166/1024 [MB] (23 MBps) Copying: 190/1024 [MB] (23 MBps) Copying: 214/1024 [MB] (24 MBps) Copying: 239/1024 [MB] (24 MBps) Copying: 262/1024 [MB] (23 MBps) Copying: 285/1024 [MB] (23 MBps) Copying: 309/1024 [MB] (23 MBps) Copying: 333/1024 [MB] (24 MBps) Copying: 357/1024 [MB] (23 MBps) Copying: 381/1024 [MB] (24 MBps) Copying: 405/1024 [MB] (23 MBps) Copying: 429/1024 [MB] (24 MBps) Copying: 453/1024 [MB] (24 MBps) Copying: 477/1024 [MB] (24 MBps) Copying: 501/1024 [MB] (24 MBps) Copying: 525/1024 [MB] (23 MBps) Copying: 551/1024 [MB] (25 MBps) Copying: 575/1024 [MB] (23 MBps) Copying: 599/1024 [MB] (24 MBps) Copying: 624/1024 [MB] (25 MBps) Copying: 649/1024 [MB] (24 MBps) Copying: 673/1024 [MB] (23 MBps) Copying: 698/1024 [MB] (25 MBps) Copying: 722/1024 [MB] (24 MBps) Copying: 746/1024 [MB] (23 MBps) Copying: 770/1024 [MB] (24 MBps) Copying: 795/1024 [MB] (24 MBps) Copying: 818/1024 [MB] (23 MBps) Copying: 842/1024 [MB] (23 MBps) Copying: 867/1024 [MB] (24 MBps) Copying: 891/1024 [MB] (24 MBps) Copying: 915/1024 [MB] (24 MBps) Copying: 940/1024 [MB] (24 MBps) Copying: 963/1024 [MB] (23 MBps) Copying: 987/1024 [MB] (23 MBps) Copying: 1011/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-02-14 19:21:23.836346] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:20:46.552 [2024-02-14 19:21:23.836672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.552 [2024-02-14 19:21:23.836697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:46.552 [2024-02-14 19:21:23.836714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:46.552 [2024-02-14 19:21:23.836754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.552 [2024-02-14 19:21:23.836781] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.552 [2024-02-14 19:21:23.839926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.552 [2024-02-14 19:21:23.839957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:46.552 [2024-02-14 19:21:23.839987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:20:46.552 [2024-02-14 19:21:23.839996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.552 [2024-02-14 19:21:23.840211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.552 [2024-02-14 19:21:23.840226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Stop core poller 00:20:46.552 [2024-02-14 19:21:23.840250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:46.552 [2024-02-14 19:21:23.840263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.552 [2024-02-14 19:21:23.844380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.552 [2024-02-14 19:21:23.844409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:46.552 [2024-02-14 19:21:23.844438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.099 ms 00:20:46.552 [2024-02-14 19:21:23.844464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.552 [2024-02-14 19:21:23.850570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.552 [2024-02-14 19:21:23.850597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:46.552 [2024-02-14 19:21:23.850625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.085 ms 00:20:46.552 [2024-02-14 19:21:23.850634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.552 [2024-02-14 19:21:23.880466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.552 [2024-02-14 19:21:23.880547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:46.552 [2024-02-14 19:21:23.880567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.737 ms 00:20:46.552 [2024-02-14 19:21:23.880578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.552 [2024-02-14 19:21:23.899860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.552 [2024-02-14 19:21:23.899920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:46.552 [2024-02-14 19:21:23.899954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.236 ms 00:20:46.552 [2024-02-14 19:21:23.899964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.552 [2024-02-14 19:21:23.900137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.552 [2024-02-14 19:21:23.900155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:46.552 [2024-02-14 19:21:23.900167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:46.552 [2024-02-14 19:21:23.900176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.552 [2024-02-14 19:21:23.929328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.552 [2024-02-14 19:21:23.929538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:46.552 [2024-02-14 19:21:23.929670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.135 ms 00:20:46.552 [2024-02-14 19:21:23.929767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.552 [2024-02-14 19:21:23.957333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.552 [2024-02-14 19:21:23.957528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:46.552 [2024-02-14 19:21:23.957569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.436 ms 00:20:46.552 [2024-02-14 19:21:23.957595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.813 [2024-02-14 19:21:23.985923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.814 [2024-02-14 19:21:23.985960] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:46.814 [2024-02-14 19:21:23.985991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.302 ms 00:20:46.814 [2024-02-14 19:21:23.986012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.814 [2024-02-14 19:21:24.013137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.814 [2024-02-14 19:21:24.013172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:46.814 [2024-02-14 19:21:24.013202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.019 ms 00:20:46.814 [2024-02-14 19:21:24.013227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.814 [2024-02-14 19:21:24.013249] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:46.814 [2024-02-14 19:21:24.013267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 
261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.013925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:46.814 [2024-02-14 19:21:24.014605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014615] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 
19:21:24.014874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:46.815 [2024-02-14 19:21:24.014969] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:46.815 [2024-02-14 19:21:24.014979] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4322074a-dfb9-45be-b531-8eaf623071d6 00:20:46.815 [2024-02-14 19:21:24.014989] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:46.815 [2024-02-14 19:21:24.015001] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:46.815 [2024-02-14 19:21:24.015011] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:46.815 [2024-02-14 19:21:24.015021] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:46.815 [2024-02-14 19:21:24.015030] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:46.815 [2024-02-14 19:21:24.015046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:46.815 [2024-02-14 19:21:24.015055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:46.815 [2024-02-14 19:21:24.015064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:46.815 [2024-02-14 19:21:24.015072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:46.815 [2024-02-14 19:21:24.015083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.815 [2024-02-14 19:21:24.015103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:46.815 [2024-02-14 19:21:24.015114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms 00:20:46.815 [2024-02-14 19:21:24.015124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.815 [2024-02-14 19:21:24.029862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.815 [2024-02-14 19:21:24.029896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:46.815 [2024-02-14 19:21:24.029927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.685 ms 00:20:46.815 [2024-02-14 19:21:24.029944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.815 [2024-02-14 19:21:24.030181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.815 [2024-02-14 19:21:24.030195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:46.815 [2024-02-14 19:21:24.030206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:20:46.815 [2024-02-14 19:21:24.030215] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.815 [2024-02-14 19:21:24.069182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.815 [2024-02-14 19:21:24.069222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.815 [2024-02-14 19:21:24.069274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.815 [2024-02-14 19:21:24.069284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.815 [2024-02-14 19:21:24.069339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.815 [2024-02-14 19:21:24.069352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.815 [2024-02-14 19:21:24.069362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.815 [2024-02-14 19:21:24.069371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.815 [2024-02-14 19:21:24.069449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.815 [2024-02-14 19:21:24.069466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.815 [2024-02-14 19:21:24.069477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.815 [2024-02-14 19:21:24.069493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.815 [2024-02-14 19:21:24.069512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.815 [2024-02-14 19:21:24.069560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.815 [2024-02-14 19:21:24.069571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.815 [2024-02-14 19:21:24.069580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.815 [2024-02-14 19:21:24.149859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.815 [2024-02-14 19:21:24.149922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.815 [2024-02-14 19:21:24.149962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.815 [2024-02-14 19:21:24.149972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.815 [2024-02-14 19:21:24.186845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.815 [2024-02-14 19:21:24.186927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.815 [2024-02-14 19:21:24.186941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.815 [2024-02-14 19:21:24.186950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.815 [2024-02-14 19:21:24.187028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.815 [2024-02-14 19:21:24.187043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.815 [2024-02-14 19:21:24.187053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.815 [2024-02-14 19:21:24.187062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.815 [2024-02-14 19:21:24.187112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.815 [2024-02-14 19:21:24.187125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.816 [2024-02-14 19:21:24.187136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:20:46.816 [2024-02-14 19:21:24.187145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.816 [2024-02-14 19:21:24.187241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.816 [2024-02-14 19:21:24.187256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.816 [2024-02-14 19:21:24.187272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.816 [2024-02-14 19:21:24.187281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.816 [2024-02-14 19:21:24.187325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.816 [2024-02-14 19:21:24.187345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:46.816 [2024-02-14 19:21:24.187355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.816 [2024-02-14 19:21:24.187364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.816 [2024-02-14 19:21:24.187402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.816 [2024-02-14 19:21:24.187415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.816 [2024-02-14 19:21:24.187424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.816 [2024-02-14 19:21:24.187433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.816 [2024-02-14 19:21:24.187480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.816 [2024-02-14 19:21:24.187510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.816 [2024-02-14 19:21:24.187536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.816 [2024-02-14 19:21:24.187574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.816 [2024-02-14 19:21:24.187705] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 351.018 ms, result 0 00:20:47.752 00:20:47.752 00:20:47.752 19:21:25 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:50.283 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:50.283 19:21:27 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:50.283 [2024-02-14 19:21:27.343797] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:20:50.283 [2024-02-14 19:21:27.343936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75493 ] 00:20:50.283 [2024-02-14 19:21:27.505086] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.542 [2024-02-14 19:21:27.710878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.542 [2024-02-14 19:21:27.711008] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:20:50.802 [2024-02-14 19:21:27.992229] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.802 [2024-02-14 19:21:27.992335] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.802 [2024-02-14 19:21:28.144752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.802 [2024-02-14 19:21:28.144844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:50.802 [2024-02-14 19:21:28.144883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:50.802 [2024-02-14 19:21:28.144894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.802 [2024-02-14 19:21:28.144957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.802 [2024-02-14 19:21:28.144975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.802 [2024-02-14 19:21:28.144987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:50.803 [2024-02-14 19:21:28.144998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.145026] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:50.803 [2024-02-14 19:21:28.146081] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:50.803 [2024-02-14 19:21:28.146153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.803 [2024-02-14 19:21:28.146167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.803 [2024-02-14 19:21:28.146179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:20:50.803 [2024-02-14 19:21:28.146194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.147419] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:50.803 [2024-02-14 19:21:28.163562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.803 [2024-02-14 19:21:28.163620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:50.803 [2024-02-14 19:21:28.163652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.144 ms 00:20:50.803 [2024-02-14 19:21:28.163663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.163731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.803 [2024-02-14 19:21:28.163765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:50.803 [2024-02-14 19:21:28.163779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 
ms 00:20:50.803 [2024-02-14 19:21:28.163798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.168679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.803 [2024-02-14 19:21:28.168716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.803 [2024-02-14 19:21:28.168744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms 00:20:50.803 [2024-02-14 19:21:28.168771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.168878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.803 [2024-02-14 19:21:28.168897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.803 [2024-02-14 19:21:28.168908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:50.803 [2024-02-14 19:21:28.168922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.168977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.803 [2024-02-14 19:21:28.169009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:50.803 [2024-02-14 19:21:28.169021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:50.803 [2024-02-14 19:21:28.169031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.169064] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.803 [2024-02-14 19:21:28.173001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.803 [2024-02-14 19:21:28.173052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.803 [2024-02-14 19:21:28.173097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.945 ms 00:20:50.803 [2024-02-14 19:21:28.173123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.173176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.803 [2024-02-14 19:21:28.173190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:50.803 [2024-02-14 19:21:28.173201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:50.803 [2024-02-14 19:21:28.173215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.173254] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:50.803 [2024-02-14 19:21:28.173282] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:50.803 [2024-02-14 19:21:28.173335] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:50.803 [2024-02-14 19:21:28.173356] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:50.803 [2024-02-14 19:21:28.173431] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:50.803 [2024-02-14 19:21:28.173445] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:50.803 [2024-02-14 19:21:28.173462] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x140 bytes 00:20:50.803 [2024-02-14 19:21:28.173475] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:50.803 [2024-02-14 19:21:28.173486] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:50.803 [2024-02-14 19:21:28.173496] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:50.803 [2024-02-14 19:21:28.173520] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:50.803 [2024-02-14 19:21:28.173529] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:50.803 [2024-02-14 19:21:28.173537] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:50.803 [2024-02-14 19:21:28.173547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.803 [2024-02-14 19:21:28.173557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:50.803 [2024-02-14 19:21:28.173567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:20:50.803 [2024-02-14 19:21:28.173594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.173659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.803 [2024-02-14 19:21:28.173672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:50.803 [2024-02-14 19:21:28.173682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:50.803 [2024-02-14 19:21:28.173692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.803 [2024-02-14 19:21:28.173826] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:50.803 [2024-02-14 19:21:28.173856] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:50.803 [2024-02-14 19:21:28.173869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.803 [2024-02-14 19:21:28.173881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.803 [2024-02-14 19:21:28.173893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:50.803 [2024-02-14 19:21:28.173902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:50.803 [2024-02-14 19:21:28.173913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:50.803 [2024-02-14 19:21:28.173923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:50.803 [2024-02-14 19:21:28.173933] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:50.803 [2024-02-14 19:21:28.173942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.803 [2024-02-14 19:21:28.173952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:50.803 [2024-02-14 19:21:28.173962] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:50.803 [2024-02-14 19:21:28.173972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.803 [2024-02-14 19:21:28.173982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:50.803 [2024-02-14 19:21:28.173993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:50.803 [2024-02-14 19:21:28.174003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.803 [2024-02-14 19:21:28.174012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:20:50.803 [2024-02-14 19:21:28.174023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:50.803 [2024-02-14 19:21:28.174033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.803 [2024-02-14 19:21:28.174042] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:50.803 [2024-02-14 19:21:28.174109] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:50.803 [2024-02-14 19:21:28.174133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:50.803 [2024-02-14 19:21:28.174142] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:50.803 [2024-02-14 19:21:28.174151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:50.803 [2024-02-14 19:21:28.174173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.803 [2024-02-14 19:21:28.174181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:50.803 [2024-02-14 19:21:28.174190] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:50.803 [2024-02-14 19:21:28.174198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.803 [2024-02-14 19:21:28.174207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:50.803 [2024-02-14 19:21:28.174215] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:50.803 [2024-02-14 19:21:28.174223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.803 [2024-02-14 19:21:28.174232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:50.803 [2024-02-14 19:21:28.174241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:50.803 [2024-02-14 19:21:28.174249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.803 [2024-02-14 19:21:28.174257] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:50.803 [2024-02-14 19:21:28.174266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:50.803 [2024-02-14 19:21:28.174274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.803 [2024-02-14 19:21:28.174283] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:50.803 [2024-02-14 19:21:28.174291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:50.803 [2024-02-14 19:21:28.174299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.803 [2024-02-14 19:21:28.174307] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:50.803 [2024-02-14 19:21:28.174321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:50.803 [2024-02-14 19:21:28.174330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.804 [2024-02-14 19:21:28.174339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.804 [2024-02-14 19:21:28.174349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:50.804 [2024-02-14 19:21:28.174358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:50.804 [2024-02-14 19:21:28.174368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:50.804 [2024-02-14 19:21:28.174376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:50.804 [2024-02-14 19:21:28.174385] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:50.804 [2024-02-14 19:21:28.174394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:50.804 [2024-02-14 19:21:28.174404] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:50.804 [2024-02-14 19:21:28.174416] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.804 [2024-02-14 19:21:28.174427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:50.804 [2024-02-14 19:21:28.174437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:50.804 [2024-02-14 19:21:28.174447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:50.804 [2024-02-14 19:21:28.174456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:50.804 [2024-02-14 19:21:28.174465] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:50.804 [2024-02-14 19:21:28.174475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:50.804 [2024-02-14 19:21:28.174484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:50.804 [2024-02-14 19:21:28.174509] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:50.804 [2024-02-14 19:21:28.174519] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:50.804 [2024-02-14 19:21:28.174544] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:50.804 [2024-02-14 19:21:28.174555] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:50.804 [2024-02-14 19:21:28.174566] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:50.804 [2024-02-14 19:21:28.174576] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:50.804 [2024-02-14 19:21:28.174604] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:50.804 [2024-02-14 19:21:28.174617] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.804 [2024-02-14 19:21:28.174629] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:50.804 [2024-02-14 19:21:28.174639] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:50.804 [2024-02-14 19:21:28.174649] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:50.804 [2024-02-14 19:21:28.174659] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:50.804 [2024-02-14 19:21:28.174670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.804 [2024-02-14 19:21:28.174680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:50.804 [2024-02-14 19:21:28.174691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:20:50.804 [2024-02-14 19:21:28.174706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.804 [2024-02-14 19:21:28.191075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.804 [2024-02-14 19:21:28.191112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.804 [2024-02-14 19:21:28.191143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.304 ms 00:20:50.804 [2024-02-14 19:21:28.191158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.804 [2024-02-14 19:21:28.191240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.804 [2024-02-14 19:21:28.191253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:50.804 [2024-02-14 19:21:28.191263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:50.804 [2024-02-14 19:21:28.191272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-02-14 19:21:28.237975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.064 [2024-02-14 19:21:28.238086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.064 [2024-02-14 19:21:28.238118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.643 ms 00:20:51.064 [2024-02-14 19:21:28.238145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-02-14 19:21:28.238271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.064 [2024-02-14 19:21:28.238288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.064 [2024-02-14 19:21:28.238300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:51.064 [2024-02-14 19:21:28.238311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-02-14 19:21:28.238746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.064 [2024-02-14 19:21:28.238780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.064 [2024-02-14 19:21:28.238801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:20:51.065 [2024-02-14 19:21:28.238812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.238997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.239033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.065 [2024-02-14 19:21:28.239045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:20:51.065 [2024-02-14 19:21:28.239055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.256378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 
19:21:28.256432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.065 [2024-02-14 19:21:28.256463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.282 ms 00:20:51.065 [2024-02-14 19:21:28.256473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.272827] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:51.065 [2024-02-14 19:21:28.272912] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:51.065 [2024-02-14 19:21:28.272945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.272957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:51.065 [2024-02-14 19:21:28.272969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.298 ms 00:20:51.065 [2024-02-14 19:21:28.272980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.302287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.302340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:51.065 [2024-02-14 19:21:28.302372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.259 ms 00:20:51.065 [2024-02-14 19:21:28.302382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.315828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.315879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:51.065 [2024-02-14 19:21:28.315909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.394 ms 00:20:51.065 [2024-02-14 19:21:28.315919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.330649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.330685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:51.065 [2024-02-14 19:21:28.330714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.689 ms 00:20:51.065 [2024-02-14 19:21:28.330723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.331218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.331256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:51.065 [2024-02-14 19:21:28.331269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:20:51.065 [2024-02-14 19:21:28.331279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.399703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.399789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:51.065 [2024-02-14 19:21:28.399826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.400 ms 00:20:51.065 [2024-02-14 19:21:28.399837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.411841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:51.065 [2024-02-14 19:21:28.414214] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.414262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:51.065 [2024-02-14 19:21:28.414298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.300 ms 00:20:51.065 [2024-02-14 19:21:28.414308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.414402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.414420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:51.065 [2024-02-14 19:21:28.414432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:51.065 [2024-02-14 19:21:28.414441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.414560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.414579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:51.065 [2024-02-14 19:21:28.414590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:51.065 [2024-02-14 19:21:28.414605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.416737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.416817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:51.065 [2024-02-14 19:21:28.416853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:20:51.065 [2024-02-14 19:21:28.416864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.416925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.416942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:51.065 [2024-02-14 19:21:28.416954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:51.065 [2024-02-14 19:21:28.416964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.417005] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:51.065 [2024-02-14 19:21:28.417023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.417033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:51.065 [2024-02-14 19:21:28.417075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:51.065 [2024-02-14 19:21:28.417086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.444160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.444218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:51.065 [2024-02-14 19:21:28.444249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.050 ms 00:20:51.065 [2024-02-14 19:21:28.444267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.444342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-02-14 19:21:28.444358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:51.065 [2024-02-14 19:21:28.444369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:20:51.065 [2024-02-14 19:21:28.444379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-02-14 19:21:28.445819] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.296 ms, result 0 00:21:34.186  Copying: 23/1024 [MB] (23 MBps) Copying: 48/1024 [MB] (24 MBps) Copying: 72/1024 [MB] (24 MBps) Copying: 97/1024 [MB] (24 MBps) Copying: 121/1024 [MB] (24 MBps) Copying: 145/1024 [MB] (24 MBps) Copying: 169/1024 [MB] (24 MBps) Copying: 194/1024 [MB] (24 MBps) Copying: 218/1024 [MB] (24 MBps) Copying: 243/1024 [MB] (24 MBps) Copying: 267/1024 [MB] (24 MBps) Copying: 291/1024 [MB] (24 MBps) Copying: 316/1024 [MB] (24 MBps) Copying: 340/1024 [MB] (24 MBps) Copying: 365/1024 [MB] (24 MBps) Copying: 390/1024 [MB] (24 MBps) Copying: 414/1024 [MB] (24 MBps) Copying: 439/1024 [MB] (24 MBps) Copying: 464/1024 [MB] (24 MBps) Copying: 489/1024 [MB] (24 MBps) Copying: 513/1024 [MB] (24 MBps) Copying: 538/1024 [MB] (24 MBps) Copying: 562/1024 [MB] (24 MBps) Copying: 586/1024 [MB] (24 MBps) Copying: 611/1024 [MB] (24 MBps) Copying: 635/1024 [MB] (24 MBps) Copying: 659/1024 [MB] (24 MBps) Copying: 683/1024 [MB] (24 MBps) Copying: 708/1024 [MB] (24 MBps) Copying: 732/1024 [MB] (24 MBps) Copying: 756/1024 [MB] (24 MBps) Copying: 781/1024 [MB] (24 MBps) Copying: 805/1024 [MB] (24 MBps) Copying: 829/1024 [MB] (24 MBps) Copying: 853/1024 [MB] (23 MBps) Copying: 878/1024 [MB] (24 MBps) Copying: 902/1024 [MB] (24 MBps) Copying: 926/1024 [MB] (24 MBps) Copying: 951/1024 [MB] (24 MBps) Copying: 976/1024 [MB] (24 MBps) Copying: 1001/1024 [MB] (24 MBps) Copying: 1023/1024 [MB] (21 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-02-14 19:22:11.379372] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:21:34.187 [2024-02-14 19:22:11.385534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.187 [2024-02-14 19:22:11.385591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:34.187 [2024-02-14 19:22:11.385611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:34.187 [2024-02-14 19:22:11.385622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.187 [2024-02-14 19:22:11.387151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:34.187 [2024-02-14 19:22:11.394361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.187 [2024-02-14 19:22:11.394396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:34.187 [2024-02-14 19:22:11.394426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.104 ms 00:21:34.187 [2024-02-14 19:22:11.394443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.187 [2024-02-14 19:22:11.404356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.187 [2024-02-14 19:22:11.404393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:34.187 [2024-02-14 19:22:11.404422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.652 ms 00:21:34.187 [2024-02-14 19:22:11.404432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.187 [2024-02-14 19:22:11.424186] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:34.187 [2024-02-14 19:22:11.424237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:34.187 [2024-02-14 19:22:11.424268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.734 ms 00:21:34.187 [2024-02-14 19:22:11.424278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.187 [2024-02-14 19:22:11.429658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.187 [2024-02-14 19:22:11.429687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:34.187 [2024-02-14 19:22:11.429714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.337 ms 00:21:34.187 [2024-02-14 19:22:11.429731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.187 [2024-02-14 19:22:11.454283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.187 [2024-02-14 19:22:11.454320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:34.187 [2024-02-14 19:22:11.454350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.475 ms 00:21:34.187 [2024-02-14 19:22:11.454359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.187 [2024-02-14 19:22:11.468890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.187 [2024-02-14 19:22:11.468925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:34.187 [2024-02-14 19:22:11.468955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.492 ms 00:21:34.187 [2024-02-14 19:22:11.468965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.187 [2024-02-14 19:22:11.571604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.187 [2024-02-14 19:22:11.571666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:34.187 [2024-02-14 19:22:11.571685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.598 ms 00:21:34.187 [2024-02-14 19:22:11.571696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.187 [2024-02-14 19:22:11.596244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.187 [2024-02-14 19:22:11.596280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:34.187 [2024-02-14 19:22:11.596310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.529 ms 00:21:34.187 [2024-02-14 19:22:11.596319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.447 [2024-02-14 19:22:11.621940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.447 [2024-02-14 19:22:11.621976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:34.447 [2024-02-14 19:22:11.621990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.583 ms 00:21:34.447 [2024-02-14 19:22:11.621998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.447 [2024-02-14 19:22:11.645748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.447 [2024-02-14 19:22:11.645801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:34.447 [2024-02-14 19:22:11.645814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.712 ms 00:21:34.447 [2024-02-14 19:22:11.645823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:34.447 [2024-02-14 19:22:11.669948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.447 [2024-02-14 19:22:11.670000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:34.447 [2024-02-14 19:22:11.670030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.051 ms 00:21:34.447 [2024-02-14 19:22:11.670040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.447 [2024-02-14 19:22:11.670105] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:34.447 [2024-02-14 19:22:11.670124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115968 / 261120 wr_cnt: 1 state: open 00:21:34.447 [2024-02-14 19:22:11.670135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:34.447 [2024-02-14 19:22:11.670145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:34.447 [2024-02-14 19:22:11.670154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:34.447 [2024-02-14 19:22:11.670163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:21:34.448 [2024-02-14 19:22:11.670349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.670994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:34.448 [2024-02-14 19:22:11.671097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:34.449 [2024-02-14 19:22:11.671106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:34.449 [2024-02-14 19:22:11.671116] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:34.449 [2024-02-14 19:22:11.671126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:34.449 [2024-02-14 19:22:11.671136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:34.449 [2024-02-14 19:22:11.671145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:34.449 [2024-02-14 19:22:11.671155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:34.449 [2024-02-14 19:22:11.671172] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:34.449 [2024-02-14 19:22:11.671181] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4322074a-dfb9-45be-b531-8eaf623071d6 00:21:34.449 [2024-02-14 19:22:11.671196] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115968 00:21:34.449 [2024-02-14 19:22:11.671205] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116928 00:21:34.449 [2024-02-14 19:22:11.671213] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115968 00:21:34.449 [2024-02-14 19:22:11.671223] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:21:34.449 [2024-02-14 19:22:11.671232] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:34.449 [2024-02-14 19:22:11.671242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:34.449 [2024-02-14 19:22:11.671251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:34.449 [2024-02-14 19:22:11.671259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:34.449 [2024-02-14 19:22:11.671267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:34.449 [2024-02-14 19:22:11.671276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.449 [2024-02-14 19:22:11.671296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:34.449 [2024-02-14 19:22:11.671306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:21:34.449 [2024-02-14 19:22:11.671315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.684760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.449 [2024-02-14 19:22:11.684808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:34.449 [2024-02-14 19:22:11.684837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.410 ms 00:21:34.449 [2024-02-14 19:22:11.684847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.685087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.449 [2024-02-14 19:22:11.685104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:34.449 [2024-02-14 19:22:11.685116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:21:34.449 [2024-02-14 19:22:11.685133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.720591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.720630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.449 [2024-02-14 
19:22:11.720659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.720669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.720721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.720734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.449 [2024-02-14 19:22:11.720743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.720759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.720831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.720848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.449 [2024-02-14 19:22:11.720859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.720868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.720902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.720929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.449 [2024-02-14 19:22:11.720939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.720948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.803152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.803227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.449 [2024-02-14 19:22:11.803258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.803268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.834880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.834929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.449 [2024-02-14 19:22:11.834958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.834974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.835046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.835061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.449 [2024-02-14 19:22:11.835071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.835081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.835125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.835139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.449 [2024-02-14 19:22:11.835148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.835172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.835280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.835297] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.449 [2024-02-14 19:22:11.835308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.835317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.835366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.835383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:34.449 [2024-02-14 19:22:11.835394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.835404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.835448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.835463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.449 [2024-02-14 19:22:11.835472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.835482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.835567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.449 [2024-02-14 19:22:11.835585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.449 [2024-02-14 19:22:11.835596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.449 [2024-02-14 19:22:11.835606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.449 [2024-02-14 19:22:11.835736] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 451.996 ms, result 0 00:21:36.355 00:21:36.355 00:21:36.355 19:22:13 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:36.355 [2024-02-14 19:22:13.366198] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:21:36.355 [2024-02-14 19:22:13.366362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75959 ] 00:21:36.355 [2024-02-14 19:22:13.532461] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.355 [2024-02-14 19:22:13.678081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.355 [2024-02-14 19:22:13.678173] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:21:36.614 [2024-02-14 19:22:13.925332] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:36.614 [2024-02-14 19:22:13.925407] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:36.875 [2024-02-14 19:22:14.073416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.875 [2024-02-14 19:22:14.073460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:36.875 [2024-02-14 19:22:14.073502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:36.875 [2024-02-14 19:22:14.073523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.875 [2024-02-14 19:22:14.073607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.875 [2024-02-14 19:22:14.073632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:36.875 [2024-02-14 19:22:14.073651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:36.875 [2024-02-14 19:22:14.073666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.875 [2024-02-14 19:22:14.073724] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:36.875 [2024-02-14 19:22:14.074759] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:36.875 [2024-02-14 19:22:14.074799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.875 [2024-02-14 19:22:14.074820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:36.875 [2024-02-14 19:22:14.074838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:21:36.875 [2024-02-14 19:22:14.074862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.875 [2024-02-14 19:22:14.076135] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:36.875 [2024-02-14 19:22:14.088871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.875 [2024-02-14 19:22:14.088926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:36.875 [2024-02-14 19:22:14.088950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.738 ms 00:21:36.875 [2024-02-14 19:22:14.088969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.875 [2024-02-14 19:22:14.089064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.875 [2024-02-14 19:22:14.089089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:36.875 [2024-02-14 19:22:14.089108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 
ms 00:21:36.875 [2024-02-14 19:22:14.089124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.875 [2024-02-14 19:22:14.093546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.875 [2024-02-14 19:22:14.093583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:36.875 [2024-02-14 19:22:14.093604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.294 ms 00:21:36.875 [2024-02-14 19:22:14.093621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.875 [2024-02-14 19:22:14.093779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.875 [2024-02-14 19:22:14.093839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:36.876 [2024-02-14 19:22:14.093860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:21:36.876 [2024-02-14 19:22:14.093884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.876 [2024-02-14 19:22:14.093980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.876 [2024-02-14 19:22:14.094006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:36.876 [2024-02-14 19:22:14.094027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:36.876 [2024-02-14 19:22:14.094044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.876 [2024-02-14 19:22:14.094106] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:36.876 [2024-02-14 19:22:14.097817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.876 [2024-02-14 19:22:14.097854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:36.876 [2024-02-14 19:22:14.097878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.722 ms 00:21:36.876 [2024-02-14 19:22:14.097897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.876 [2024-02-14 19:22:14.097959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.876 [2024-02-14 19:22:14.097984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:36.876 [2024-02-14 19:22:14.098019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:36.876 [2024-02-14 19:22:14.098042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.876 [2024-02-14 19:22:14.098115] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:36.876 [2024-02-14 19:22:14.098155] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:36.876 [2024-02-14 19:22:14.098207] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:36.876 [2024-02-14 19:22:14.098237] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:36.876 [2024-02-14 19:22:14.098332] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:36.876 [2024-02-14 19:22:14.098374] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:36.876 [2024-02-14 19:22:14.098406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x140 bytes 00:21:36.876 [2024-02-14 19:22:14.098427] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:36.876 [2024-02-14 19:22:14.098444] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:36.876 [2024-02-14 19:22:14.098458] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:36.876 [2024-02-14 19:22:14.098473] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:36.876 [2024-02-14 19:22:14.098504] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:36.876 [2024-02-14 19:22:14.098521] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:36.876 [2024-02-14 19:22:14.098540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.876 [2024-02-14 19:22:14.098554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:36.876 [2024-02-14 19:22:14.098569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:21:36.876 [2024-02-14 19:22:14.098583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.876 [2024-02-14 19:22:14.098676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.876 [2024-02-14 19:22:14.098696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:36.876 [2024-02-14 19:22:14.098711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:36.876 [2024-02-14 19:22:14.098725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.876 [2024-02-14 19:22:14.098837] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:36.876 [2024-02-14 19:22:14.098865] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:36.876 [2024-02-14 19:22:14.098888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:36.876 [2024-02-14 19:22:14.098909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:36.876 [2024-02-14 19:22:14.098929] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:36.876 [2024-02-14 19:22:14.098948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:36.876 [2024-02-14 19:22:14.098968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:36.876 [2024-02-14 19:22:14.098987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:36.876 [2024-02-14 19:22:14.099007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:36.876 [2024-02-14 19:22:14.099026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:36.876 [2024-02-14 19:22:14.099045] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:36.876 [2024-02-14 19:22:14.099064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:36.876 [2024-02-14 19:22:14.099082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:36.876 [2024-02-14 19:22:14.099102] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:36.876 [2024-02-14 19:22:14.099123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:36.876 [2024-02-14 19:22:14.099142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:36.876 [2024-02-14 19:22:14.099162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:21:36.876 [2024-02-14 19:22:14.099181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:36.876 [2024-02-14 19:22:14.099200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:36.876 [2024-02-14 19:22:14.099220] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:36.876 [2024-02-14 19:22:14.099267] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:36.876 [2024-02-14 19:22:14.099294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:36.876 [2024-02-14 19:22:14.099321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:36.876 [2024-02-14 19:22:14.099347] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:36.876 [2024-02-14 19:22:14.099372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:36.876 [2024-02-14 19:22:14.099398] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:36.876 [2024-02-14 19:22:14.099424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:36.876 [2024-02-14 19:22:14.099448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:36.876 [2024-02-14 19:22:14.099471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:36.876 [2024-02-14 19:22:14.099503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:36.876 [2024-02-14 19:22:14.099523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:36.876 [2024-02-14 19:22:14.099550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:36.876 [2024-02-14 19:22:14.099573] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:36.876 [2024-02-14 19:22:14.099597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:36.876 [2024-02-14 19:22:14.099614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:36.876 [2024-02-14 19:22:14.099630] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:36.876 [2024-02-14 19:22:14.099647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:36.876 [2024-02-14 19:22:14.099661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:36.876 [2024-02-14 19:22:14.099679] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:36.877 [2024-02-14 19:22:14.099695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:36.877 [2024-02-14 19:22:14.099711] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:36.877 [2024-02-14 19:22:14.099737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:36.877 [2024-02-14 19:22:14.099755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:36.877 [2024-02-14 19:22:14.099772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:36.877 [2024-02-14 19:22:14.099789] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:36.877 [2024-02-14 19:22:14.099808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:36.877 [2024-02-14 19:22:14.099825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:36.877 [2024-02-14 19:22:14.099841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:36.877 [2024-02-14 19:22:14.099857] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:36.877 [2024-02-14 19:22:14.099873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:36.877 [2024-02-14 19:22:14.099891] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:36.877 [2024-02-14 19:22:14.099912] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:36.877 [2024-02-14 19:22:14.099931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:36.877 [2024-02-14 19:22:14.099950] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:36.877 [2024-02-14 19:22:14.099967] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:36.877 [2024-02-14 19:22:14.099984] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:36.877 [2024-02-14 19:22:14.100001] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:36.877 [2024-02-14 19:22:14.100019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:36.877 [2024-02-14 19:22:14.100037] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:36.877 [2024-02-14 19:22:14.100054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:36.877 [2024-02-14 19:22:14.100072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:36.877 [2024-02-14 19:22:14.100088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:36.877 [2024-02-14 19:22:14.100106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:36.877 [2024-02-14 19:22:14.100138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:36.877 [2024-02-14 19:22:14.100155] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:36.877 [2024-02-14 19:22:14.100171] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:36.877 [2024-02-14 19:22:14.100189] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:36.877 [2024-02-14 19:22:14.100208] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:36.877 [2024-02-14 19:22:14.100226] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:36.877 [2024-02-14 19:22:14.100242] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:36.877 [2024-02-14 19:22:14.100260] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:36.877 [2024-02-14 19:22:14.100279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.877 [2024-02-14 19:22:14.100296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:36.877 [2024-02-14 19:22:14.100314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:21:36.877 [2024-02-14 19:22:14.100335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.877 [2024-02-14 19:22:14.116104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.877 [2024-02-14 19:22:14.116157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:36.877 [2024-02-14 19:22:14.116181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.699 ms 00:21:36.877 [2024-02-14 19:22:14.116207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.877 [2024-02-14 19:22:14.116311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.877 [2024-02-14 19:22:14.116337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:36.877 [2024-02-14 19:22:14.116356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:36.877 [2024-02-14 19:22:14.116388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.877 [2024-02-14 19:22:14.156359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.877 [2024-02-14 19:22:14.156404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:36.877 [2024-02-14 19:22:14.156427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.887 ms 00:21:36.877 [2024-02-14 19:22:14.156444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.877 [2024-02-14 19:22:14.156516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.877 [2024-02-14 19:22:14.156542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:36.877 [2024-02-14 19:22:14.156562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:36.877 [2024-02-14 19:22:14.156577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.877 [2024-02-14 19:22:14.157032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.877 [2024-02-14 19:22:14.157067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:36.877 [2024-02-14 19:22:14.157097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:21:36.877 [2024-02-14 19:22:14.157115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.877 [2024-02-14 19:22:14.157304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.877 [2024-02-14 19:22:14.157345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:36.877 [2024-02-14 19:22:14.157368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:21:36.878 [2024-02-14 19:22:14.157386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.878 [2024-02-14 19:22:14.171836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.878 [2024-02-14 
19:22:14.171872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:36.878 [2024-02-14 19:22:14.171896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.411 ms 00:21:36.878 [2024-02-14 19:22:14.171913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.878 [2024-02-14 19:22:14.186386] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:36.878 [2024-02-14 19:22:14.186454] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:36.878 [2024-02-14 19:22:14.186478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.878 [2024-02-14 19:22:14.186539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:36.878 [2024-02-14 19:22:14.186564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.419 ms 00:21:36.878 [2024-02-14 19:22:14.186583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.878 [2024-02-14 19:22:14.215512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.878 [2024-02-14 19:22:14.215591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:36.878 [2024-02-14 19:22:14.215619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.864 ms 00:21:36.878 [2024-02-14 19:22:14.215640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.878 [2024-02-14 19:22:14.229942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.878 [2024-02-14 19:22:14.229984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:36.878 [2024-02-14 19:22:14.230035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.229 ms 00:21:36.878 [2024-02-14 19:22:14.230067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.878 [2024-02-14 19:22:14.242701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.878 [2024-02-14 19:22:14.242753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:36.878 [2024-02-14 19:22:14.242776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.566 ms 00:21:36.878 [2024-02-14 19:22:14.242793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.878 [2024-02-14 19:22:14.243302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.878 [2024-02-14 19:22:14.243383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:36.878 [2024-02-14 19:22:14.243407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:21:36.878 [2024-02-14 19:22:14.243426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.140 [2024-02-14 19:22:14.306674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.140 [2024-02-14 19:22:14.306744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:37.140 [2024-02-14 19:22:14.306772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.214 ms 00:21:37.140 [2024-02-14 19:22:14.306788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.140 [2024-02-14 19:22:14.317004] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:37.140 [2024-02-14 19:22:14.319185] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.140 [2024-02-14 19:22:14.319233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:37.140 [2024-02-14 19:22:14.319263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.327 ms 00:21:37.140 [2024-02-14 19:22:14.319281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.140 [2024-02-14 19:22:14.319398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.140 [2024-02-14 19:22:14.319423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:37.140 [2024-02-14 19:22:14.319472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:37.140 [2024-02-14 19:22:14.319491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.140 [2024-02-14 19:22:14.320606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.140 [2024-02-14 19:22:14.320643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:37.140 [2024-02-14 19:22:14.320665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:21:37.140 [2024-02-14 19:22:14.320690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.140 [2024-02-14 19:22:14.322426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.140 [2024-02-14 19:22:14.322478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:37.140 [2024-02-14 19:22:14.322515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:21:37.140 [2024-02-14 19:22:14.322532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.140 [2024-02-14 19:22:14.322587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.140 [2024-02-14 19:22:14.322611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:37.140 [2024-02-14 19:22:14.322630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:37.140 [2024-02-14 19:22:14.322647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.140 [2024-02-14 19:22:14.322733] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:37.140 [2024-02-14 19:22:14.322763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.140 [2024-02-14 19:22:14.322781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:37.140 [2024-02-14 19:22:14.322800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:37.140 [2024-02-14 19:22:14.322817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.140 [2024-02-14 19:22:14.348566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.140 [2024-02-14 19:22:14.348618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:37.140 [2024-02-14 19:22:14.348642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.708 ms 00:21:37.141 [2024-02-14 19:22:14.348667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.141 [2024-02-14 19:22:14.348759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.141 [2024-02-14 19:22:14.348785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:37.141 [2024-02-14 19:22:14.348804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.037 ms 00:21:37.141 [2024-02-14 19:22:14.348836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.141 [2024-02-14 19:22:14.355853] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.737 ms, result 0 00:22:21.710  Copying: 21/1024 [MB] (21 MBps) Copying: 45/1024 [MB] (23 MBps) Copying: 69/1024 [MB] (23 MBps) Copying: 92/1024 [MB] (22 MBps) Copying: 115/1024 [MB] (23 MBps) Copying: 138/1024 [MB] (23 MBps) Copying: 162/1024 [MB] (23 MBps) Copying: 185/1024 [MB] (23 MBps) Copying: 208/1024 [MB] (22 MBps) Copying: 230/1024 [MB] (22 MBps) Copying: 253/1024 [MB] (23 MBps) Copying: 277/1024 [MB] (23 MBps) Copying: 300/1024 [MB] (23 MBps) Copying: 323/1024 [MB] (22 MBps) Copying: 345/1024 [MB] (22 MBps) Copying: 368/1024 [MB] (22 MBps) Copying: 391/1024 [MB] (22 MBps) Copying: 414/1024 [MB] (22 MBps) Copying: 437/1024 [MB] (22 MBps) Copying: 460/1024 [MB] (22 MBps) Copying: 483/1024 [MB] (23 MBps) Copying: 506/1024 [MB] (23 MBps) Copying: 529/1024 [MB] (23 MBps) Copying: 552/1024 [MB] (22 MBps) Copying: 575/1024 [MB] (23 MBps) Copying: 599/1024 [MB] (23 MBps) Copying: 621/1024 [MB] (22 MBps) Copying: 645/1024 [MB] (23 MBps) Copying: 668/1024 [MB] (23 MBps) Copying: 691/1024 [MB] (23 MBps) Copying: 714/1024 [MB] (22 MBps) Copying: 737/1024 [MB] (23 MBps) Copying: 760/1024 [MB] (23 MBps) Copying: 784/1024 [MB] (23 MBps) Copying: 806/1024 [MB] (22 MBps) Copying: 830/1024 [MB] (23 MBps) Copying: 853/1024 [MB] (23 MBps) Copying: 877/1024 [MB] (23 MBps) Copying: 900/1024 [MB] (23 MBps) Copying: 924/1024 [MB] (23 MBps) Copying: 947/1024 [MB] (23 MBps) Copying: 970/1024 [MB] (22 MBps) Copying: 993/1024 [MB] (23 MBps) Copying: 1016/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-02-14 19:22:59.022299] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:22:21.710 [2024-02-14 19:22:59.022713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.710 [2024-02-14 19:22:59.022793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:21.710 [2024-02-14 19:22:59.022822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:21.710 [2024-02-14 19:22:59.022857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.710 [2024-02-14 19:22:59.022900] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:21.710 [2024-02-14 19:22:59.026031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.710 [2024-02-14 19:22:59.026073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:21.710 [2024-02-14 19:22:59.026125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:22:21.710 [2024-02-14 19:22:59.026178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.710 [2024-02-14 19:22:59.026545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.710 [2024-02-14 19:22:59.026596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:21.710 [2024-02-14 19:22:59.026620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:22:21.710 [2024-02-14 19:22:59.026640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.710 
[2024-02-14 19:22:59.032642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.710 [2024-02-14 19:22:59.032685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:21.710 [2024-02-14 19:22:59.032709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.970 ms 00:22:21.710 [2024-02-14 19:22:59.032728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.710 [2024-02-14 19:22:59.038247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.710 [2024-02-14 19:22:59.038282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:21.710 [2024-02-14 19:22:59.038304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.455 ms 00:22:21.710 [2024-02-14 19:22:59.038321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.710 [2024-02-14 19:22:59.062674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.710 [2024-02-14 19:22:59.062712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:21.710 [2024-02-14 19:22:59.062735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.257 ms 00:22:21.710 [2024-02-14 19:22:59.062753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.711 [2024-02-14 19:22:59.077149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.711 [2024-02-14 19:22:59.077185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:21.711 [2024-02-14 19:22:59.077207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.344 ms 00:22:21.711 [2024-02-14 19:22:59.077225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.971 [2024-02-14 19:22:59.194249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.971 [2024-02-14 19:22:59.194314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:21.971 [2024-02-14 19:22:59.194345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.969 ms 00:22:21.971 [2024-02-14 19:22:59.194365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.971 [2024-02-14 19:22:59.219799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.971 [2024-02-14 19:22:59.219837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:21.971 [2024-02-14 19:22:59.219858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.404 ms 00:22:21.971 [2024-02-14 19:22:59.219875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.971 [2024-02-14 19:22:59.244609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.971 [2024-02-14 19:22:59.244646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:21.971 [2024-02-14 19:22:59.244668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.684 ms 00:22:21.971 [2024-02-14 19:22:59.244684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.971 [2024-02-14 19:22:59.269075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.971 [2024-02-14 19:22:59.269110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:21.971 [2024-02-14 19:22:59.269131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.340 ms 00:22:21.971 [2024-02-14 19:22:59.269149] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.971 [2024-02-14 19:22:59.293672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.971 [2024-02-14 19:22:59.293734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:21.971 [2024-02-14 19:22:59.293796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.420 ms 00:22:21.971 [2024-02-14 19:22:59.293816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.971 [2024-02-14 19:22:59.293873] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:21.971 [2024-02-14 19:22:59.293917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:22:21.971 [2024-02-14 19:22:59.293939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.293957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.293990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294372] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:21.971 [2024-02-14 19:22:59.294407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 
19:22:59.294846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.294989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:22:21.972 [2024-02-14 19:22:59.295294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:21.972 [2024-02-14 19:22:59.295862] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:21.972 [2024-02-14 19:22:59.295880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4322074a-dfb9-45be-b531-8eaf623071d6 00:22:21.972 [2024-02-14 19:22:59.295906] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:22:21.972 [2024-02-14 19:22:59.295922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 18624 00:22:21.972 [2024-02-14 19:22:59.295938] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 17664 00:22:21.973 [2024-02-14 19:22:59.295956] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0543 00:22:21.973 [2024-02-14 19:22:59.295974] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:21.973 [2024-02-14 19:22:59.295991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:21.973 [2024-02-14 19:22:59.296007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:21.973 [2024-02-14 19:22:59.296023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:21.973 [2024-02-14 19:22:59.296037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:21.973 [2024-02-14 19:22:59.296054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.973 [2024-02-14 19:22:59.296071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:21.973 [2024-02-14 19:22:59.296102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:22:21.973 [2024-02-14 19:22:59.296121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.973 [2024-02-14 19:22:59.310456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.973 [2024-02-14 19:22:59.310508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:21.973 [2024-02-14 19:22:59.310557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.279 ms 00:22:21.973 [2024-02-14 19:22:59.310576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.973 [2024-02-14 19:22:59.310923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.973 [2024-02-14 19:22:59.310958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:21.973 [2024-02-14 19:22:59.310980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:22:21.973 [2024-02-14 19:22:59.311014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.973 [2024-02-14 19:22:59.346344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.973 [2024-02-14 19:22:59.346384] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:21.973 [2024-02-14 19:22:59.346405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.973 [2024-02-14 19:22:59.346421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.973 [2024-02-14 19:22:59.346523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.973 [2024-02-14 19:22:59.346547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:21.973 [2024-02-14 19:22:59.346566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.973 [2024-02-14 19:22:59.346589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.973 [2024-02-14 19:22:59.346745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.973 [2024-02-14 19:22:59.346773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:21.973 [2024-02-14 19:22:59.346793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.973 [2024-02-14 19:22:59.346809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.973 [2024-02-14 19:22:59.346849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.973 [2024-02-14 19:22:59.346871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:21.973 [2024-02-14 19:22:59.346902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.973 [2024-02-14 19:22:59.346919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.232 [2024-02-14 19:22:59.425871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.233 [2024-02-14 19:22:59.425950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:22.233 [2024-02-14 19:22:59.425976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.233 [2024-02-14 19:22:59.425993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.233 [2024-02-14 19:22:59.456513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.233 [2024-02-14 19:22:59.456562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:22.233 [2024-02-14 19:22:59.456585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.233 [2024-02-14 19:22:59.456612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.233 [2024-02-14 19:22:59.456717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.233 [2024-02-14 19:22:59.456744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:22.233 [2024-02-14 19:22:59.456779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.233 [2024-02-14 19:22:59.456824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.233 [2024-02-14 19:22:59.456917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.233 [2024-02-14 19:22:59.456943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:22.233 [2024-02-14 19:22:59.456964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.233 [2024-02-14 19:22:59.456982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.233 [2024-02-14 19:22:59.457136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
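The statistics dump a few lines above reports total writes 18624, user writes 17664 and WAF 1.0543 for ftl0. Assuming the WAF counter is simply total writes divided by user writes (a plausible reading of the dump, not confirmed here), the reported value checks out:

$ echo "scale=4; 18624 / 17664" | bc
1.0543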
00:22:22.233 [2024-02-14 19:22:59.457178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:22.233 [2024-02-14 19:22:59.457200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.233 [2024-02-14 19:22:59.457219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.233 [2024-02-14 19:22:59.457287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.233 [2024-02-14 19:22:59.457312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:22.233 [2024-02-14 19:22:59.457331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.233 [2024-02-14 19:22:59.457348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.233 [2024-02-14 19:22:59.457413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.233 [2024-02-14 19:22:59.457444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:22.233 [2024-02-14 19:22:59.457463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.233 [2024-02-14 19:22:59.457479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.233 [2024-02-14 19:22:59.457574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.233 [2024-02-14 19:22:59.457600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:22.233 [2024-02-14 19:22:59.457619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.233 [2024-02-14 19:22:59.457636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.233 [2024-02-14 19:22:59.457872] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 435.068 ms, result 0 00:22:23.171 00:22:23.171 00:22:23.171 19:23:00 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:25.078 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:25.078 19:23:02 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:25.078 19:23:02 -- ftl/restore.sh@85 -- # restore_kill 00:22:25.078 19:23:02 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:25.078 19:23:02 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:25.078 19:23:02 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:25.078 19:23:02 -- ftl/restore.sh@32 -- # killprocess 74302 00:22:25.078 19:23:02 -- common/autotest_common.sh@924 -- # '[' -z 74302 ']' 00:22:25.078 19:23:02 -- common/autotest_common.sh@928 -- # kill -0 74302 00:22:25.078 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (74302) - No such process 00:22:25.078 Process with pid 74302 is not found 00:22:25.078 19:23:02 -- common/autotest_common.sh@951 -- # echo 'Process with pid 74302 is not found' 00:22:25.078 19:23:02 -- ftl/restore.sh@33 -- # remove_shm 00:22:25.078 Remove shared memory files 00:22:25.078 19:23:02 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:25.078 19:23:02 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:25.078 19:23:02 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:25.078 19:23:02 -- ftl/common.sh@207 -- # rm -f rm -f 00:22:25.078 19:23:02 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:25.078 19:23:02 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:25.078 00:22:25.078 real 3m26.804s 
00:22:25.078 user 3m13.110s 00:22:25.078 sys 0m15.055s 00:22:25.078 19:23:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:25.078 ************************************ 00:22:25.078 END TEST ftl_restore 00:22:25.078 19:23:02 -- common/autotest_common.sh@10 -- # set +x 00:22:25.078 ************************************ 00:22:25.078 19:23:02 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:25.078 19:23:02 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:22:25.078 19:23:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:22:25.078 19:23:02 -- common/autotest_common.sh@10 -- # set +x 00:22:25.078 ************************************ 00:22:25.078 START TEST ftl_dirty_shutdown 00:22:25.078 ************************************ 00:22:25.078 19:23:02 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:25.078 * Looking for test storage... 00:22:25.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:25.078 19:23:02 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:25.078 19:23:02 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:25.078 19:23:02 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:25.078 19:23:02 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:25.078 19:23:02 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:25.078 19:23:02 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:25.078 19:23:02 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:25.078 19:23:02 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:25.078 19:23:02 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:25.078 19:23:02 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.078 19:23:02 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.078 19:23:02 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:25.078 19:23:02 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:25.078 19:23:02 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:25.078 19:23:02 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:25.078 19:23:02 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:25.078 19:23:02 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:25.078 19:23:02 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.078 19:23:02 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.078 19:23:02 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:25.079 19:23:02 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:25.079 19:23:02 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:25.079 19:23:02 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:25.079 19:23:02 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:25.079 19:23:02 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:25.079 19:23:02 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:25.079 
19:23:02 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:25.079 19:23:02 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:25.079 19:23:02 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@45 -- # svcpid=76507 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:25.079 19:23:02 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76507 00:22:25.079 19:23:02 -- common/autotest_common.sh@817 -- # '[' -z 76507 ']' 00:22:25.079 19:23:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.079 19:23:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:25.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.079 19:23:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.079 19:23:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:25.079 19:23:02 -- common/autotest_common.sh@10 -- # set +x 00:22:25.079 [2024-02-14 19:23:02.495352] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
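Condensed, the device bring-up that the xtrace output below records amounts to the following RPC sequence (commands, PCI addresses and UUIDs are copied from the calls that appear later in this section; this is only a summary sketch of those calls, not an additional run):

# base device: attach the NVMe namespace at 0000:00:07.0 and build a thin-provisioned lvol on it
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 612c6e47-eb96-4989-9a65-7fabe7788283
# NV cache: attach the NVMe at 0000:00:06.0 and split off a 5171 MiB write-buffer partition
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
# FTL: create ftl0 on the lvol, with nvc0n1p0 as cache and a 10 MiB L2P DRAM limit
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0213bc57-7273-4b8a-8cee-92b1892addc2 --l2p_dram_limit 10 -c nvc0n1p0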
00:22:25.079 [2024-02-14 19:23:02.495527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76507 ] 00:22:25.338 [2024-02-14 19:23:02.668998] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.598 [2024-02-14 19:23:02.892913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:25.598 [2024-02-14 19:23:02.893272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.977 19:23:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.977 19:23:04 -- common/autotest_common.sh@850 -- # return 0 00:22:26.977 19:23:04 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:26.977 19:23:04 -- ftl/common.sh@54 -- # local name=nvme0 00:22:26.977 19:23:04 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:26.977 19:23:04 -- ftl/common.sh@56 -- # local size=103424 00:22:26.977 19:23:04 -- ftl/common.sh@59 -- # local base_bdev 00:22:26.977 19:23:04 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:27.236 19:23:04 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:27.236 19:23:04 -- ftl/common.sh@62 -- # local base_size 00:22:27.236 19:23:04 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:27.236 19:23:04 -- common/autotest_common.sh@1355 -- # local bdev_name=nvme0n1 00:22:27.236 19:23:04 -- common/autotest_common.sh@1356 -- # local bdev_info 00:22:27.236 19:23:04 -- common/autotest_common.sh@1357 -- # local bs 00:22:27.236 19:23:04 -- common/autotest_common.sh@1358 -- # local nb 00:22:27.236 19:23:04 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:27.496 19:23:04 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:22:27.496 { 00:22:27.496 "name": "nvme0n1", 00:22:27.496 "aliases": [ 00:22:27.496 "7482b653-9fd4-4c78-a947-4a23e5b6309b" 00:22:27.496 ], 00:22:27.496 "product_name": "NVMe disk", 00:22:27.496 "block_size": 4096, 00:22:27.496 "num_blocks": 1310720, 00:22:27.496 "uuid": "7482b653-9fd4-4c78-a947-4a23e5b6309b", 00:22:27.496 "assigned_rate_limits": { 00:22:27.496 "rw_ios_per_sec": 0, 00:22:27.496 "rw_mbytes_per_sec": 0, 00:22:27.496 "r_mbytes_per_sec": 0, 00:22:27.496 "w_mbytes_per_sec": 0 00:22:27.496 }, 00:22:27.496 "claimed": true, 00:22:27.496 "claim_type": "read_many_write_one", 00:22:27.496 "zoned": false, 00:22:27.496 "supported_io_types": { 00:22:27.496 "read": true, 00:22:27.496 "write": true, 00:22:27.496 "unmap": true, 00:22:27.496 "write_zeroes": true, 00:22:27.496 "flush": true, 00:22:27.496 "reset": true, 00:22:27.496 "compare": true, 00:22:27.496 "compare_and_write": false, 00:22:27.496 "abort": true, 00:22:27.496 "nvme_admin": true, 00:22:27.496 "nvme_io": true 00:22:27.496 }, 00:22:27.496 "driver_specific": { 00:22:27.496 "nvme": [ 00:22:27.496 { 00:22:27.496 "pci_address": "0000:00:07.0", 00:22:27.496 "trid": { 00:22:27.496 "trtype": "PCIe", 00:22:27.496 "traddr": "0000:00:07.0" 00:22:27.496 }, 00:22:27.496 "ctrlr_data": { 00:22:27.496 "cntlid": 0, 00:22:27.496 "vendor_id": "0x1b36", 00:22:27.496 "model_number": "QEMU NVMe Ctrl", 00:22:27.496 "serial_number": "12341", 00:22:27.496 "firmware_revision": "8.0.0", 00:22:27.496 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:27.496 "oacs": { 00:22:27.496 "security": 
0, 00:22:27.496 "format": 1, 00:22:27.496 "firmware": 0, 00:22:27.496 "ns_manage": 1 00:22:27.496 }, 00:22:27.496 "multi_ctrlr": false, 00:22:27.496 "ana_reporting": false 00:22:27.496 }, 00:22:27.496 "vs": { 00:22:27.496 "nvme_version": "1.4" 00:22:27.496 }, 00:22:27.496 "ns_data": { 00:22:27.496 "id": 1, 00:22:27.496 "can_share": false 00:22:27.496 } 00:22:27.496 } 00:22:27.496 ], 00:22:27.496 "mp_policy": "active_passive" 00:22:27.496 } 00:22:27.496 } 00:22:27.496 ]' 00:22:27.496 19:23:04 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:22:27.496 19:23:04 -- common/autotest_common.sh@1360 -- # bs=4096 00:22:27.496 19:23:04 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:22:27.496 19:23:04 -- common/autotest_common.sh@1361 -- # nb=1310720 00:22:27.496 19:23:04 -- common/autotest_common.sh@1364 -- # bdev_size=5120 00:22:27.496 19:23:04 -- common/autotest_common.sh@1365 -- # echo 5120 00:22:27.496 19:23:04 -- ftl/common.sh@63 -- # base_size=5120 00:22:27.496 19:23:04 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:27.496 19:23:04 -- ftl/common.sh@67 -- # clear_lvols 00:22:27.497 19:23:04 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:27.497 19:23:04 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:27.756 19:23:04 -- ftl/common.sh@28 -- # stores=df0c28da-f497-4a3a-852f-a282cc5d2695 00:22:27.756 19:23:04 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:27.756 19:23:04 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df0c28da-f497-4a3a-852f-a282cc5d2695 00:22:27.756 19:23:05 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:28.016 19:23:05 -- ftl/common.sh@68 -- # lvs=612c6e47-eb96-4989-9a65-7fabe7788283 00:22:28.016 19:23:05 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 612c6e47-eb96-4989-9a65-7fabe7788283 00:22:28.276 19:23:05 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:28.276 19:23:05 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:28.276 19:23:05 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:28.276 19:23:05 -- ftl/common.sh@35 -- # local name=nvc0 00:22:28.276 19:23:05 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:28.276 19:23:05 -- ftl/common.sh@37 -- # local base_bdev=0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:28.276 19:23:05 -- ftl/common.sh@38 -- # local cache_size= 00:22:28.276 19:23:05 -- ftl/common.sh@41 -- # get_bdev_size 0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:28.276 19:23:05 -- common/autotest_common.sh@1355 -- # local bdev_name=0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:28.276 19:23:05 -- common/autotest_common.sh@1356 -- # local bdev_info 00:22:28.276 19:23:05 -- common/autotest_common.sh@1357 -- # local bs 00:22:28.276 19:23:05 -- common/autotest_common.sh@1358 -- # local nb 00:22:28.276 19:23:05 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:28.534 19:23:05 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:22:28.534 { 00:22:28.534 "name": "0213bc57-7273-4b8a-8cee-92b1892addc2", 00:22:28.534 "aliases": [ 00:22:28.534 "lvs/nvme0n1p0" 00:22:28.535 ], 00:22:28.535 "product_name": "Logical Volume", 00:22:28.535 "block_size": 4096, 00:22:28.535 "num_blocks": 26476544, 00:22:28.535 
"uuid": "0213bc57-7273-4b8a-8cee-92b1892addc2", 00:22:28.535 "assigned_rate_limits": { 00:22:28.535 "rw_ios_per_sec": 0, 00:22:28.535 "rw_mbytes_per_sec": 0, 00:22:28.535 "r_mbytes_per_sec": 0, 00:22:28.535 "w_mbytes_per_sec": 0 00:22:28.535 }, 00:22:28.535 "claimed": false, 00:22:28.535 "zoned": false, 00:22:28.535 "supported_io_types": { 00:22:28.535 "read": true, 00:22:28.535 "write": true, 00:22:28.535 "unmap": true, 00:22:28.535 "write_zeroes": true, 00:22:28.535 "flush": false, 00:22:28.535 "reset": true, 00:22:28.535 "compare": false, 00:22:28.535 "compare_and_write": false, 00:22:28.535 "abort": false, 00:22:28.535 "nvme_admin": false, 00:22:28.535 "nvme_io": false 00:22:28.535 }, 00:22:28.535 "driver_specific": { 00:22:28.535 "lvol": { 00:22:28.535 "lvol_store_uuid": "612c6e47-eb96-4989-9a65-7fabe7788283", 00:22:28.535 "base_bdev": "nvme0n1", 00:22:28.535 "thin_provision": true, 00:22:28.535 "snapshot": false, 00:22:28.535 "clone": false, 00:22:28.535 "esnap_clone": false 00:22:28.535 } 00:22:28.535 } 00:22:28.535 } 00:22:28.535 ]' 00:22:28.535 19:23:05 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:22:28.535 19:23:05 -- common/autotest_common.sh@1360 -- # bs=4096 00:22:28.535 19:23:05 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:22:28.793 19:23:05 -- common/autotest_common.sh@1361 -- # nb=26476544 00:22:28.793 19:23:05 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:22:28.793 19:23:05 -- common/autotest_common.sh@1365 -- # echo 103424 00:22:28.793 19:23:05 -- ftl/common.sh@41 -- # local base_size=5171 00:22:28.793 19:23:05 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:28.793 19:23:05 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:22:28.793 19:23:06 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:28.793 19:23:06 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:28.793 19:23:06 -- ftl/common.sh@48 -- # get_bdev_size 0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:28.793 19:23:06 -- common/autotest_common.sh@1355 -- # local bdev_name=0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:28.793 19:23:06 -- common/autotest_common.sh@1356 -- # local bdev_info 00:22:28.793 19:23:06 -- common/autotest_common.sh@1357 -- # local bs 00:22:28.793 19:23:06 -- common/autotest_common.sh@1358 -- # local nb 00:22:28.793 19:23:06 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:29.052 19:23:06 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:22:29.052 { 00:22:29.052 "name": "0213bc57-7273-4b8a-8cee-92b1892addc2", 00:22:29.052 "aliases": [ 00:22:29.052 "lvs/nvme0n1p0" 00:22:29.052 ], 00:22:29.052 "product_name": "Logical Volume", 00:22:29.052 "block_size": 4096, 00:22:29.052 "num_blocks": 26476544, 00:22:29.052 "uuid": "0213bc57-7273-4b8a-8cee-92b1892addc2", 00:22:29.052 "assigned_rate_limits": { 00:22:29.052 "rw_ios_per_sec": 0, 00:22:29.052 "rw_mbytes_per_sec": 0, 00:22:29.052 "r_mbytes_per_sec": 0, 00:22:29.052 "w_mbytes_per_sec": 0 00:22:29.052 }, 00:22:29.052 "claimed": false, 00:22:29.052 "zoned": false, 00:22:29.052 "supported_io_types": { 00:22:29.052 "read": true, 00:22:29.052 "write": true, 00:22:29.052 "unmap": true, 00:22:29.052 "write_zeroes": true, 00:22:29.052 "flush": false, 00:22:29.052 "reset": true, 00:22:29.052 "compare": false, 00:22:29.052 "compare_and_write": false, 00:22:29.052 "abort": false, 00:22:29.052 "nvme_admin": false, 00:22:29.052 "nvme_io": false 00:22:29.052 }, 
00:22:29.052 "driver_specific": { 00:22:29.052 "lvol": { 00:22:29.052 "lvol_store_uuid": "612c6e47-eb96-4989-9a65-7fabe7788283", 00:22:29.052 "base_bdev": "nvme0n1", 00:22:29.052 "thin_provision": true, 00:22:29.052 "snapshot": false, 00:22:29.052 "clone": false, 00:22:29.052 "esnap_clone": false 00:22:29.052 } 00:22:29.052 } 00:22:29.052 } 00:22:29.052 ]' 00:22:29.052 19:23:06 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:22:29.310 19:23:06 -- common/autotest_common.sh@1360 -- # bs=4096 00:22:29.310 19:23:06 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:22:29.310 19:23:06 -- common/autotest_common.sh@1361 -- # nb=26476544 00:22:29.310 19:23:06 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:22:29.310 19:23:06 -- common/autotest_common.sh@1365 -- # echo 103424 00:22:29.310 19:23:06 -- ftl/common.sh@48 -- # cache_size=5171 00:22:29.310 19:23:06 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:29.310 19:23:06 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:29.310 19:23:06 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:29.310 19:23:06 -- common/autotest_common.sh@1355 -- # local bdev_name=0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:29.310 19:23:06 -- common/autotest_common.sh@1356 -- # local bdev_info 00:22:29.310 19:23:06 -- common/autotest_common.sh@1357 -- # local bs 00:22:29.310 19:23:06 -- common/autotest_common.sh@1358 -- # local nb 00:22:29.310 19:23:06 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0213bc57-7273-4b8a-8cee-92b1892addc2 00:22:29.569 19:23:06 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:22:29.569 { 00:22:29.569 "name": "0213bc57-7273-4b8a-8cee-92b1892addc2", 00:22:29.569 "aliases": [ 00:22:29.569 "lvs/nvme0n1p0" 00:22:29.569 ], 00:22:29.569 "product_name": "Logical Volume", 00:22:29.569 "block_size": 4096, 00:22:29.569 "num_blocks": 26476544, 00:22:29.569 "uuid": "0213bc57-7273-4b8a-8cee-92b1892addc2", 00:22:29.569 "assigned_rate_limits": { 00:22:29.569 "rw_ios_per_sec": 0, 00:22:29.569 "rw_mbytes_per_sec": 0, 00:22:29.569 "r_mbytes_per_sec": 0, 00:22:29.569 "w_mbytes_per_sec": 0 00:22:29.569 }, 00:22:29.569 "claimed": false, 00:22:29.569 "zoned": false, 00:22:29.569 "supported_io_types": { 00:22:29.569 "read": true, 00:22:29.569 "write": true, 00:22:29.569 "unmap": true, 00:22:29.569 "write_zeroes": true, 00:22:29.569 "flush": false, 00:22:29.569 "reset": true, 00:22:29.569 "compare": false, 00:22:29.569 "compare_and_write": false, 00:22:29.569 "abort": false, 00:22:29.569 "nvme_admin": false, 00:22:29.569 "nvme_io": false 00:22:29.569 }, 00:22:29.569 "driver_specific": { 00:22:29.569 "lvol": { 00:22:29.569 "lvol_store_uuid": "612c6e47-eb96-4989-9a65-7fabe7788283", 00:22:29.569 "base_bdev": "nvme0n1", 00:22:29.569 "thin_provision": true, 00:22:29.569 "snapshot": false, 00:22:29.569 "clone": false, 00:22:29.569 "esnap_clone": false 00:22:29.569 } 00:22:29.569 } 00:22:29.569 } 00:22:29.569 ]' 00:22:29.569 19:23:06 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:22:29.569 19:23:06 -- common/autotest_common.sh@1360 -- # bs=4096 00:22:29.569 19:23:06 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:22:29.829 19:23:07 -- common/autotest_common.sh@1361 -- # nb=26476544 00:22:29.829 19:23:07 -- common/autotest_common.sh@1364 -- # bdev_size=103424 00:22:29.829 19:23:07 -- common/autotest_common.sh@1365 -- # echo 103424 00:22:29.829 
19:23:07 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:29.829 19:23:07 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0213bc57-7273-4b8a-8cee-92b1892addc2 --l2p_dram_limit 10' 00:22:29.829 19:23:07 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:29.829 19:23:07 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:22:29.829 19:23:07 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:29.829 19:23:07 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0213bc57-7273-4b8a-8cee-92b1892addc2 --l2p_dram_limit 10 -c nvc0n1p0 00:22:29.829 [2024-02-14 19:23:07.202686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.829 [2024-02-14 19:23:07.202735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:29.829 [2024-02-14 19:23:07.202773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:29.829 [2024-02-14 19:23:07.202785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.829 [2024-02-14 19:23:07.202857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.829 [2024-02-14 19:23:07.202874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.829 [2024-02-14 19:23:07.202887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:29.829 [2024-02-14 19:23:07.202897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.829 [2024-02-14 19:23:07.202926] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:29.829 [2024-02-14 19:23:07.203942] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:29.829 [2024-02-14 19:23:07.203983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.829 [2024-02-14 19:23:07.203997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.829 [2024-02-14 19:23:07.204011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:22:29.829 [2024-02-14 19:23:07.204023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.829 [2024-02-14 19:23:07.204178] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d995f5ad-0556-475a-9889-bdde60ce73db 00:22:29.829 [2024-02-14 19:23:07.205125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.829 [2024-02-14 19:23:07.205197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:29.829 [2024-02-14 19:23:07.205228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:29.829 [2024-02-14 19:23:07.205240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.829 [2024-02-14 19:23:07.209577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.829 [2024-02-14 19:23:07.209622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.829 [2024-02-14 19:23:07.209637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.277 ms 00:22:29.829 [2024-02-14 19:23:07.209649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.829 [2024-02-14 19:23:07.209776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.829 [2024-02-14 19:23:07.209813] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.829 [2024-02-14 19:23:07.209827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:29.829 [2024-02-14 19:23:07.209842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.829 [2024-02-14 19:23:07.209909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.829 [2024-02-14 19:23:07.209931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:29.829 [2024-02-14 19:23:07.209946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:29.829 [2024-02-14 19:23:07.209959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.830 [2024-02-14 19:23:07.209988] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:29.830 [2024-02-14 19:23:07.213977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.830 [2024-02-14 19:23:07.214015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.830 [2024-02-14 19:23:07.214064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.992 ms 00:22:29.830 [2024-02-14 19:23:07.214090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.830 [2024-02-14 19:23:07.214162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.830 [2024-02-14 19:23:07.214177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:29.830 [2024-02-14 19:23:07.214190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:29.830 [2024-02-14 19:23:07.214200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.830 [2024-02-14 19:23:07.214243] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:29.830 [2024-02-14 19:23:07.214355] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:29.830 [2024-02-14 19:23:07.214374] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:29.830 [2024-02-14 19:23:07.214388] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:29.830 [2024-02-14 19:23:07.214403] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:29.830 [2024-02-14 19:23:07.214415] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:29.830 [2024-02-14 19:23:07.214428] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:29.830 [2024-02-14 19:23:07.214440] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:29.830 [2024-02-14 19:23:07.214451] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:29.830 [2024-02-14 19:23:07.214460] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:29.830 [2024-02-14 19:23:07.214487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.830 [2024-02-14 19:23:07.214498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:29.830 [2024-02-14 19:23:07.214510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:22:29.830 [2024-02-14 19:23:07.214520] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.830 [2024-02-14 19:23:07.214598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.830 [2024-02-14 19:23:07.214614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:29.830 [2024-02-14 19:23:07.214627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:29.830 [2024-02-14 19:23:07.214638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.830 [2024-02-14 19:23:07.214715] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:29.830 [2024-02-14 19:23:07.214730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:29.830 [2024-02-14 19:23:07.214743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.830 [2024-02-14 19:23:07.214754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.830 [2024-02-14 19:23:07.214766] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:29.830 [2024-02-14 19:23:07.214776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:29.830 [2024-02-14 19:23:07.214788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:29.830 [2024-02-14 19:23:07.214798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:29.830 [2024-02-14 19:23:07.214810] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:29.830 [2024-02-14 19:23:07.214820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.830 [2024-02-14 19:23:07.214830] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:29.830 [2024-02-14 19:23:07.214841] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:29.830 [2024-02-14 19:23:07.214852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.830 [2024-02-14 19:23:07.214862] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:29.830 [2024-02-14 19:23:07.214874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:29.830 [2024-02-14 19:23:07.214883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.830 [2024-02-14 19:23:07.214899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:29.830 [2024-02-14 19:23:07.214909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:29.830 [2024-02-14 19:23:07.214920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.830 [2024-02-14 19:23:07.214930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:29.830 [2024-02-14 19:23:07.214942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:29.830 [2024-02-14 19:23:07.214951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:29.830 [2024-02-14 19:23:07.214962] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:29.830 [2024-02-14 19:23:07.214971] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:29.830 [2024-02-14 19:23:07.214982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:29.830 [2024-02-14 19:23:07.214992] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:29.830 [2024-02-14 19:23:07.215003] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:29.830 [2024-02-14 19:23:07.215012] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:29.830 [2024-02-14 19:23:07.215023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:29.830 [2024-02-14 19:23:07.215032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:29.830 [2024-02-14 19:23:07.215043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:29.830 [2024-02-14 19:23:07.215052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:29.830 [2024-02-14 19:23:07.215065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:29.830 [2024-02-14 19:23:07.215074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:29.830 [2024-02-14 19:23:07.215085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:29.830 [2024-02-14 19:23:07.215095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:29.830 [2024-02-14 19:23:07.215106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.830 [2024-02-14 19:23:07.215115] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:29.830 [2024-02-14 19:23:07.215126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:29.830 [2024-02-14 19:23:07.215136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.830 [2024-02-14 19:23:07.215146] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:29.830 [2024-02-14 19:23:07.215157] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:29.830 [2024-02-14 19:23:07.215170] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.830 [2024-02-14 19:23:07.215180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.830 [2024-02-14 19:23:07.215192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:29.830 [2024-02-14 19:23:07.215202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:29.830 [2024-02-14 19:23:07.215213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:29.830 [2024-02-14 19:23:07.215224] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:29.830 [2024-02-14 19:23:07.215254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:29.830 [2024-02-14 19:23:07.215264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:29.830 [2024-02-14 19:23:07.215279] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:29.830 [2024-02-14 19:23:07.215295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.830 [2024-02-14 19:23:07.215309] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:29.830 [2024-02-14 19:23:07.215320] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:29.830 [2024-02-14 19:23:07.215332] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:29.830 [2024-02-14 19:23:07.215343] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 
00:22:29.830 [2024-02-14 19:23:07.215355] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:29.830 [2024-02-14 19:23:07.215366] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:29.830 [2024-02-14 19:23:07.215378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:29.830 [2024-02-14 19:23:07.215388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:29.830 [2024-02-14 19:23:07.215400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:29.830 [2024-02-14 19:23:07.215411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:29.830 [2024-02-14 19:23:07.215423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:29.830 [2024-02-14 19:23:07.215434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:29.830 [2024-02-14 19:23:07.215449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:29.830 [2024-02-14 19:23:07.215460] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:29.830 [2024-02-14 19:23:07.215473] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.830 [2024-02-14 19:23:07.215485] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:29.830 [2024-02-14 19:23:07.215531] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:29.830 [2024-02-14 19:23:07.215545] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:29.831 [2024-02-14 19:23:07.215559] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:29.831 [2024-02-14 19:23:07.215571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.831 [2024-02-14 19:23:07.215584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:29.831 [2024-02-14 19:23:07.215610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:22:29.831 [2024-02-14 19:23:07.215639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.831 [2024-02-14 19:23:07.231583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.831 [2024-02-14 19:23:07.231642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.831 [2024-02-14 19:23:07.231659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.883 ms 00:22:29.831 [2024-02-14 19:23:07.231672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.831 [2024-02-14 19:23:07.231762] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.831 [2024-02-14 19:23:07.231782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:29.831 [2024-02-14 19:23:07.231794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:29.831 [2024-02-14 19:23:07.231808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.089 [2024-02-14 19:23:07.268800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.089 [2024-02-14 19:23:07.268863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:30.089 [2024-02-14 19:23:07.268912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.935 ms 00:22:30.089 [2024-02-14 19:23:07.268925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.089 [2024-02-14 19:23:07.268972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.089 [2024-02-14 19:23:07.268989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:30.089 [2024-02-14 19:23:07.269001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:30.089 [2024-02-14 19:23:07.269013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.089 [2024-02-14 19:23:07.269375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.089 [2024-02-14 19:23:07.269395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:30.089 [2024-02-14 19:23:07.269408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:22:30.089 [2024-02-14 19:23:07.269420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.089 [2024-02-14 19:23:07.269606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.089 [2024-02-14 19:23:07.269632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:30.090 [2024-02-14 19:23:07.269645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:22:30.090 [2024-02-14 19:23:07.269658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.090 [2024-02-14 19:23:07.284735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.090 [2024-02-14 19:23:07.284792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:30.090 [2024-02-14 19:23:07.284825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.035 ms 00:22:30.090 [2024-02-14 19:23:07.284838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.090 [2024-02-14 19:23:07.296004] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:30.090 [2024-02-14 19:23:07.298702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.090 [2024-02-14 19:23:07.298736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:30.090 [2024-02-14 19:23:07.298771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.741 ms 00:22:30.090 [2024-02-14 19:23:07.298787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.090 [2024-02-14 19:23:07.365309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.090 [2024-02-14 19:23:07.365367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:30.090 [2024-02-14 19:23:07.365405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 66.484 ms 00:22:30.090 [2024-02-14 19:23:07.365417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.090 [2024-02-14 19:23:07.365475] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:22:30.090 [2024-02-14 19:23:07.365528] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:22:32.622 [2024-02-14 19:23:09.918931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.622 [2024-02-14 19:23:09.918998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:32.622 [2024-02-14 19:23:09.919036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2553.467 ms 00:22:32.622 [2024-02-14 19:23:09.919047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.622 [2024-02-14 19:23:09.919252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.622 [2024-02-14 19:23:09.919270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:32.622 [2024-02-14 19:23:09.919287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:22:32.622 [2024-02-14 19:23:09.919297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.622 [2024-02-14 19:23:09.943909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.622 [2024-02-14 19:23:09.943947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:32.622 [2024-02-14 19:23:09.943965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.554 ms 00:22:32.622 [2024-02-14 19:23:09.943977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.622 [2024-02-14 19:23:09.968009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.622 [2024-02-14 19:23:09.968046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:32.622 [2024-02-14 19:23:09.968067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.987 ms 00:22:32.622 [2024-02-14 19:23:09.968077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.622 [2024-02-14 19:23:09.968391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.622 [2024-02-14 19:23:09.968411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:32.622 [2024-02-14 19:23:09.968424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:22:32.622 [2024-02-14 19:23:09.968434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.622 [2024-02-14 19:23:10.038014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.622 [2024-02-14 19:23:10.038077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:32.622 [2024-02-14 19:23:10.038135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.532 ms 00:22:32.622 [2024-02-14 19:23:10.038147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.881 [2024-02-14 19:23:10.064557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.881 [2024-02-14 19:23:10.064596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:32.881 [2024-02-14 19:23:10.064618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.303 ms 00:22:32.881 
[2024-02-14 19:23:10.064629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.881 [2024-02-14 19:23:10.066337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.881 [2024-02-14 19:23:10.066375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:32.881 [2024-02-14 19:23:10.066394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:22:32.881 [2024-02-14 19:23:10.066405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.881 [2024-02-14 19:23:10.091301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.881 [2024-02-14 19:23:10.091340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:32.881 [2024-02-14 19:23:10.091358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.847 ms 00:22:32.881 [2024-02-14 19:23:10.091368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.881 [2024-02-14 19:23:10.091426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.881 [2024-02-14 19:23:10.091445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:32.881 [2024-02-14 19:23:10.091458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:32.881 [2024-02-14 19:23:10.091468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.881 [2024-02-14 19:23:10.091610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.881 [2024-02-14 19:23:10.091632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:32.881 [2024-02-14 19:23:10.091646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:32.881 [2024-02-14 19:23:10.091656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.881 [2024-02-14 19:23:10.092934] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2889.676 ms, result 0 00:22:32.881 { 00:22:32.881 "name": "ftl0", 00:22:32.881 "uuid": "d995f5ad-0556-475a-9889-bdde60ce73db" 00:22:32.881 } 00:22:32.881 19:23:10 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:32.881 19:23:10 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:33.140 19:23:10 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:33.140 19:23:10 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:33.140 19:23:10 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:33.399 /dev/nbd0 00:22:33.399 19:23:10 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:33.399 19:23:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:33.399 19:23:10 -- common/autotest_common.sh@855 -- # local i 00:22:33.399 19:23:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:33.399 19:23:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:33.399 19:23:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:33.399 19:23:10 -- common/autotest_common.sh@859 -- # break 00:22:33.399 19:23:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:33.399 19:23:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:33.399 19:23:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:33.399 1+0 records in 00:22:33.399 
1+0 records out 00:22:33.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534199 s, 7.7 MB/s 00:22:33.399 19:23:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:33.399 19:23:10 -- common/autotest_common.sh@872 -- # size=4096 00:22:33.399 19:23:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:33.399 19:23:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:33.399 19:23:10 -- common/autotest_common.sh@875 -- # return 0 00:22:33.399 19:23:10 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:33.399 [2024-02-14 19:23:10.749497] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:22:33.399 [2024-02-14 19:23:10.749665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76647 ] 00:22:33.658 [2024-02-14 19:23:10.918450] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.917 [2024-02-14 19:23:11.126616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.999  Copying: 202/1024 [MB] (202 MBps) Copying: 408/1024 [MB] (206 MBps) Copying: 617/1024 [MB] (208 MBps) Copying: 826/1024 [MB] (208 MBps) Copying: 1024/1024 [MB] (average 205 MBps) 00:22:39.999 00:22:39.999 19:23:17 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:41.901 19:23:19 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:41.901 [2024-02-14 19:23:19.120105] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
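The dirty_shutdown.sh trace above exposes ftl0 over NBD, fills testfile with 1 GiB (262144 x 4 KiB blocks) of /dev/urandom, checksums it with md5sum, and, in the spdk_dd process starting here, writes that file through /dev/nbd0 with direct I/O. A condensed sketch of those traced commands, assuming the repository paths used in this run and an spdk_tgt already serving ftl0 (this is a summary of the log, not the test script itself):

  # expose the FTL bdev over NBD; the test then polls /proc/partitions for nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
  # 1 GiB of random payload (262144 blocks of 4096 bytes)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock \
      --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --bs=4096 --count=262144
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
  # push the payload through the FTL device with O_DIRECT
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 \
      --bs=4096 --count=262144 --oflag=direct

The md5sum taken here presumably serves as the reference checksum once the FTL device is brought back up later in the test.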
00:22:41.901 [2024-02-14 19:23:19.120256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76734 ] 00:22:41.901 [2024-02-14 19:23:19.271391] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.159 [2024-02-14 19:23:19.417072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.118  Copying: 12/1024 [MB] (12 MBps) Copying: 25/1024 [MB] (12 MBps) Copying: 40/1024 [MB] (14 MBps) Copying: 55/1024 [MB] (15 MBps) Copying: 70/1024 [MB] (15 MBps) Copying: 85/1024 [MB] (14 MBps) Copying: 101/1024 [MB] (15 MBps) Copying: 116/1024 [MB] (15 MBps) Copying: 131/1024 [MB] (15 MBps) Copying: 146/1024 [MB] (15 MBps) Copying: 161/1024 [MB] (15 MBps) Copying: 177/1024 [MB] (15 MBps) Copying: 192/1024 [MB] (15 MBps) Copying: 207/1024 [MB] (15 MBps) Copying: 223/1024 [MB] (15 MBps) Copying: 238/1024 [MB] (15 MBps) Copying: 253/1024 [MB] (15 MBps) Copying: 268/1024 [MB] (15 MBps) Copying: 283/1024 [MB] (15 MBps) Copying: 298/1024 [MB] (15 MBps) Copying: 313/1024 [MB] (14 MBps) Copying: 329/1024 [MB] (15 MBps) Copying: 344/1024 [MB] (15 MBps) Copying: 359/1024 [MB] (15 MBps) Copying: 374/1024 [MB] (15 MBps) Copying: 390/1024 [MB] (15 MBps) Copying: 405/1024 [MB] (15 MBps) Copying: 420/1024 [MB] (14 MBps) Copying: 435/1024 [MB] (15 MBps) Copying: 450/1024 [MB] (15 MBps) Copying: 465/1024 [MB] (14 MBps) Copying: 480/1024 [MB] (15 MBps) Copying: 496/1024 [MB] (15 MBps) Copying: 511/1024 [MB] (15 MBps) Copying: 526/1024 [MB] (14 MBps) Copying: 541/1024 [MB] (14 MBps) Copying: 556/1024 [MB] (15 MBps) Copying: 571/1024 [MB] (15 MBps) Copying: 587/1024 [MB] (15 MBps) Copying: 602/1024 [MB] (15 MBps) Copying: 617/1024 [MB] (14 MBps) Copying: 632/1024 [MB] (15 MBps) Copying: 647/1024 [MB] (14 MBps) Copying: 661/1024 [MB] (14 MBps) Copying: 676/1024 [MB] (15 MBps) Copying: 691/1024 [MB] (14 MBps) Copying: 706/1024 [MB] (14 MBps) Copying: 722/1024 [MB] (15 MBps) Copying: 737/1024 [MB] (15 MBps) Copying: 752/1024 [MB] (15 MBps) Copying: 767/1024 [MB] (15 MBps) Copying: 783/1024 [MB] (15 MBps) Copying: 798/1024 [MB] (15 MBps) Copying: 813/1024 [MB] (15 MBps) Copying: 828/1024 [MB] (15 MBps) Copying: 843/1024 [MB] (15 MBps) Copying: 858/1024 [MB] (14 MBps) Copying: 874/1024 [MB] (15 MBps) Copying: 889/1024 [MB] (15 MBps) Copying: 904/1024 [MB] (15 MBps) Copying: 919/1024 [MB] (15 MBps) Copying: 934/1024 [MB] (15 MBps) Copying: 950/1024 [MB] (15 MBps) Copying: 965/1024 [MB] (14 MBps) Copying: 980/1024 [MB] (15 MBps) Copying: 995/1024 [MB] (15 MBps) Copying: 1010/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:23:51.118 00:23:51.118 19:24:28 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:51.118 19:24:28 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:51.377 19:24:28 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:51.637 [2024-02-14 19:24:28.908940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.637 [2024-02-14 19:24:28.909012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:51.637 [2024-02-14 19:24:28.909045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:51.637 [2024-02-14 19:24:28.909059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.637 
[2024-02-14 19:24:28.909093] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:51.637 [2024-02-14 19:24:28.912391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.637 [2024-02-14 19:24:28.912422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:51.637 [2024-02-14 19:24:28.912439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:23:51.637 [2024-02-14 19:24:28.912450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.637 [2024-02-14 19:24:28.914835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.637 [2024-02-14 19:24:28.914921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:51.637 [2024-02-14 19:24:28.914958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:23:51.637 [2024-02-14 19:24:28.914973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.637 [2024-02-14 19:24:28.930741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.637 [2024-02-14 19:24:28.930790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:51.637 [2024-02-14 19:24:28.930811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.736 ms 00:23:51.637 [2024-02-14 19:24:28.930823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.637 [2024-02-14 19:24:28.936808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.637 [2024-02-14 19:24:28.936889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:51.637 [2024-02-14 19:24:28.936924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.894 ms 00:23:51.637 [2024-02-14 19:24:28.936935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.637 [2024-02-14 19:24:28.963054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.637 [2024-02-14 19:24:28.963091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:51.637 [2024-02-14 19:24:28.963126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.025 ms 00:23:51.637 [2024-02-14 19:24:28.963137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.637 [2024-02-14 19:24:28.978656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.637 [2024-02-14 19:24:28.978696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:51.637 [2024-02-14 19:24:28.978731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.472 ms 00:23:51.637 [2024-02-14 19:24:28.978743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.637 [2024-02-14 19:24:28.978898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.637 [2024-02-14 19:24:28.978918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:51.637 [2024-02-14 19:24:28.978932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:23:51.637 [2024-02-14 19:24:28.978943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.637 [2024-02-14 19:24:29.004779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.637 [2024-02-14 19:24:29.004815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:51.637 [2024-02-14 
19:24:29.004850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.809 ms 00:23:51.637 [2024-02-14 19:24:29.004860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.637 [2024-02-14 19:24:29.030170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.637 [2024-02-14 19:24:29.030220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:51.637 [2024-02-14 19:24:29.030268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.262 ms 00:23:51.637 [2024-02-14 19:24:29.030278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.897 [2024-02-14 19:24:29.056688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.897 [2024-02-14 19:24:29.056723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:51.897 [2024-02-14 19:24:29.056753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.363 ms 00:23:51.897 [2024-02-14 19:24:29.056763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.897 [2024-02-14 19:24:29.082521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.897 [2024-02-14 19:24:29.082582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:51.897 [2024-02-14 19:24:29.082617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.654 ms 00:23:51.897 [2024-02-14 19:24:29.082628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.897 [2024-02-14 19:24:29.082692] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:51.897 [2024-02-14 19:24:29.082715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 
19:24:29.082905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.082998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:23:51.897 [2024-02-14 19:24:29.083235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:51.897 [2024-02-14 19:24:29.083385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.083985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.084002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.084014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.084027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.084039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.084067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:51.898 [2024-02-14 19:24:29.084086] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:51.898 [2024-02-14 19:24:29.084099] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d995f5ad-0556-475a-9889-bdde60ce73db 00:23:51.898 [2024-02-14 19:24:29.084110] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:51.898 [2024-02-14 19:24:29.084126] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:51.898 [2024-02-14 19:24:29.084136] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:51.898 [2024-02-14 19:24:29.084149] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:51.898 [2024-02-14 19:24:29.084160] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:51.898 [2024-02-14 19:24:29.084173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:51.898 [2024-02-14 19:24:29.084183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:51.898 [2024-02-14 19:24:29.084195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:51.898 [2024-02-14 19:24:29.084204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:51.898 [2024-02-14 19:24:29.084219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.898 [2024-02-14 19:24:29.084229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:51.898 [2024-02-14 19:24:29.084243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.530 ms 00:23:51.898 [2024-02-14 19:24:29.084254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.898 [2024-02-14 19:24:29.098858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:51.898 [2024-02-14 19:24:29.098894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:51.898 [2024-02-14 19:24:29.098930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.557 ms 00:23:51.898 [2024-02-14 19:24:29.098942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.898 [2024-02-14 19:24:29.099178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.898 [2024-02-14 19:24:29.099200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:51.898 [2024-02-14 19:24:29.099215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:23:51.898 [2024-02-14 19:24:29.099225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.898 [2024-02-14 19:24:29.146514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.898 [2024-02-14 19:24:29.146551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.898 [2024-02-14 19:24:29.146568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.898 [2024-02-14 19:24:29.146578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.898 [2024-02-14 19:24:29.146641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.898 [2024-02-14 19:24:29.146656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.898 [2024-02-14 19:24:29.146668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.898 [2024-02-14 19:24:29.146678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.898 [2024-02-14 19:24:29.146769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.898 [2024-02-14 19:24:29.146787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.898 [2024-02-14 19:24:29.146801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.898 [2024-02-14 19:24:29.146811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.898 [2024-02-14 19:24:29.146835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.898 [2024-02-14 19:24:29.146847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.898 [2024-02-14 19:24:29.146859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.898 [2024-02-14 19:24:29.146869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.898 [2024-02-14 19:24:29.224094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.898 [2024-02-14 19:24:29.224150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.898 [2024-02-14 19:24:29.224169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.898 [2024-02-14 19:24:29.224180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.898 [2024-02-14 19:24:29.254171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.898 [2024-02-14 19:24:29.254207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.898 [2024-02-14 19:24:29.254226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.898 [2024-02-14 19:24:29.254236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.898 
[2024-02-14 19:24:29.254319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.899 [2024-02-14 19:24:29.254339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.899 [2024-02-14 19:24:29.254352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.899 [2024-02-14 19:24:29.254361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.899 [2024-02-14 19:24:29.254416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.899 [2024-02-14 19:24:29.254432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.899 [2024-02-14 19:24:29.254444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.899 [2024-02-14 19:24:29.254454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.899 [2024-02-14 19:24:29.254615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.899 [2024-02-14 19:24:29.254634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.899 [2024-02-14 19:24:29.254651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.899 [2024-02-14 19:24:29.254662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.899 [2024-02-14 19:24:29.254715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.899 [2024-02-14 19:24:29.254732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:51.899 [2024-02-14 19:24:29.254746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.899 [2024-02-14 19:24:29.254756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.899 [2024-02-14 19:24:29.254812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.899 [2024-02-14 19:24:29.254827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.899 [2024-02-14 19:24:29.254857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.899 [2024-02-14 19:24:29.254883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.899 [2024-02-14 19:24:29.254967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.899 [2024-02-14 19:24:29.254984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.899 [2024-02-14 19:24:29.254997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.899 [2024-02-14 19:24:29.255008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.899 [2024-02-14 19:24:29.255157] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.176 ms, result 0 00:23:51.899 true 00:23:51.899 19:24:29 -- ftl/dirty_shutdown.sh@83 -- # kill -9 76507 00:23:51.899 19:24:29 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76507 00:23:51.899 19:24:29 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:52.157 [2024-02-14 19:24:29.380852] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
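By this point the script has flushed and torn down the NBD export, unloaded ftl0 (the 'FTL shutdown' management process above, result 0), hard-killed the spdk_tgt that owned it, and started generating a second random file for the standalone spdk_dd phase. A condensed sketch of those traced steps, assuming the same paths and the spdk_tgt PID reported in this run (76507):

  sync /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
  # hard-kill the target and remove its trace file; 76507 is this run's spdk_tgt PID
  kill -9 76507
  rm -f /dev/shm/spdk_tgt_trace.pid76507
  # second 1 GiB random payload for the write that follows
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144

The step traced next (script line 88) points spdk_dd at test/ftl/config/ftl.json, presumably the bdev configuration saved earlier with save_subsystem_config, so that spdk_dd can bring ftl0 up on its own and write testfile2 at a 262144-block offset (--seek=262144).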
00:23:52.157 [2024-02-14 19:24:29.381017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77442 ] 00:23:52.157 [2024-02-14 19:24:29.547889] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.416 [2024-02-14 19:24:29.694618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.104  Copying: 219/1024 [MB] (219 MBps) Copying: 439/1024 [MB] (219 MBps) Copying: 659/1024 [MB] (220 MBps) Copying: 874/1024 [MB] (215 MBps) Copying: 1024/1024 [MB] (average 217 MBps) 00:23:58.104 00:23:58.104 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76507 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:58.104 19:24:35 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:58.362 [2024-02-14 19:24:35.591961] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:23:58.362 [2024-02-14 19:24:35.592125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77508 ] 00:23:58.362 [2024-02-14 19:24:35.757439] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.622 [2024-02-14 19:24:35.903732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.622 [2024-02-14 19:24:35.903836] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:23:58.881 [2024-02-14 19:24:36.161021] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.881 [2024-02-14 19:24:36.161098] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.881 [2024-02-14 19:24:36.223213] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:58.881 [2024-02-14 19:24:36.223670] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:58.881 [2024-02-14 19:24:36.223964] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:59.141 [2024-02-14 19:24:36.490465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.141 [2024-02-14 19:24:36.490537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:59.141 [2024-02-14 19:24:36.490573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.141 [2024-02-14 19:24:36.490583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.141 [2024-02-14 19:24:36.490645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.141 [2024-02-14 19:24:36.490662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:59.141 [2024-02-14 19:24:36.490676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:59.141 [2024-02-14 19:24:36.490686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.141 [2024-02-14 19:24:36.490714] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:23:59.141 [2024-02-14 19:24:36.491601] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:59.141 [2024-02-14 19:24:36.491655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.141 [2024-02-14 19:24:36.491668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:59.141 [2024-02-14 19:24:36.491678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:23:59.141 [2024-02-14 19:24:36.491687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.141 [2024-02-14 19:24:36.492810] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:59.141 [2024-02-14 19:24:36.505458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.141 [2024-02-14 19:24:36.505504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:59.141 [2024-02-14 19:24:36.505534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.649 ms 00:23:59.141 [2024-02-14 19:24:36.505544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.141 [2024-02-14 19:24:36.505620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.141 [2024-02-14 19:24:36.505641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:59.142 [2024-02-14 19:24:36.505652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:59.142 [2024-02-14 19:24:36.505661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.142 [2024-02-14 19:24:36.510038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.142 [2024-02-14 19:24:36.510075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:59.142 [2024-02-14 19:24:36.510105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:23:59.142 [2024-02-14 19:24:36.510115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.142 [2024-02-14 19:24:36.510319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.142 [2024-02-14 19:24:36.510337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:59.142 [2024-02-14 19:24:36.510347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:23:59.142 [2024-02-14 19:24:36.510356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.142 [2024-02-14 19:24:36.510415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.142 [2024-02-14 19:24:36.510431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:59.142 [2024-02-14 19:24:36.510441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:59.142 [2024-02-14 19:24:36.510449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.142 [2024-02-14 19:24:36.510478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:59.142 [2024-02-14 19:24:36.513999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.142 [2024-02-14 19:24:36.514032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:59.142 [2024-02-14 19:24:36.514060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.528 ms 00:23:59.142 [2024-02-14 19:24:36.514074] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.142 [2024-02-14 19:24:36.514110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.142 [2024-02-14 19:24:36.514139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:59.142 [2024-02-14 19:24:36.514149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:59.142 [2024-02-14 19:24:36.514157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.142 [2024-02-14 19:24:36.514182] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:59.142 [2024-02-14 19:24:36.514207] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:59.142 [2024-02-14 19:24:36.514241] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:59.142 [2024-02-14 19:24:36.514259] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:59.142 [2024-02-14 19:24:36.514324] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:59.142 [2024-02-14 19:24:36.514336] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:59.142 [2024-02-14 19:24:36.514347] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:59.142 [2024-02-14 19:24:36.514358] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:59.142 [2024-02-14 19:24:36.514369] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:59.142 [2024-02-14 19:24:36.514379] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:59.142 [2024-02-14 19:24:36.514387] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:59.142 [2024-02-14 19:24:36.514396] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:59.142 [2024-02-14 19:24:36.514404] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:59.142 [2024-02-14 19:24:36.514417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.142 [2024-02-14 19:24:36.514426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:59.142 [2024-02-14 19:24:36.514435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:23:59.142 [2024-02-14 19:24:36.514443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.142 [2024-02-14 19:24:36.514512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.142 [2024-02-14 19:24:36.514525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:59.142 [2024-02-14 19:24:36.514572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:59.142 [2024-02-14 19:24:36.514582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.142 [2024-02-14 19:24:36.514675] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:59.142 [2024-02-14 19:24:36.514699] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:59.142 [2024-02-14 19:24:36.514710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.142 
[2024-02-14 19:24:36.514720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.142 [2024-02-14 19:24:36.514730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:59.142 [2024-02-14 19:24:36.514738] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:59.142 [2024-02-14 19:24:36.514747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:59.142 [2024-02-14 19:24:36.514756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:59.142 [2024-02-14 19:24:36.514764] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:59.142 [2024-02-14 19:24:36.514772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.142 [2024-02-14 19:24:36.514780] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:59.142 [2024-02-14 19:24:36.514790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:59.142 [2024-02-14 19:24:36.514814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.142 [2024-02-14 19:24:36.514838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:59.142 [2024-02-14 19:24:36.514846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:59.142 [2024-02-14 19:24:36.514855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.142 [2024-02-14 19:24:36.514891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:59.142 [2024-02-14 19:24:36.514901] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:59.142 [2024-02-14 19:24:36.514910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.142 [2024-02-14 19:24:36.514933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:59.142 [2024-02-14 19:24:36.514942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:59.142 [2024-02-14 19:24:36.514951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:59.142 [2024-02-14 19:24:36.514959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:59.142 [2024-02-14 19:24:36.514968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:59.142 [2024-02-14 19:24:36.514977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:59.142 [2024-02-14 19:24:36.514985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:59.142 [2024-02-14 19:24:36.514994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:59.142 [2024-02-14 19:24:36.515002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:59.142 [2024-02-14 19:24:36.515011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:59.142 [2024-02-14 19:24:36.515019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:59.142 [2024-02-14 19:24:36.515028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:59.142 [2024-02-14 19:24:36.515037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:59.142 [2024-02-14 19:24:36.515045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:59.142 [2024-02-14 19:24:36.515054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:59.142 [2024-02-14 19:24:36.515062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:23:59.142 [2024-02-14 19:24:36.515071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:59.142 [2024-02-14 19:24:36.515079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.142 [2024-02-14 19:24:36.515087] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:59.142 [2024-02-14 19:24:36.515096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:59.142 [2024-02-14 19:24:36.515104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.142 [2024-02-14 19:24:36.515113] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:59.142 [2024-02-14 19:24:36.515123] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:59.142 [2024-02-14 19:24:36.515132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.142 [2024-02-14 19:24:36.515142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.142 [2024-02-14 19:24:36.515152] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:59.142 [2024-02-14 19:24:36.515162] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:59.142 [2024-02-14 19:24:36.515170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:59.142 [2024-02-14 19:24:36.515179] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:59.142 [2024-02-14 19:24:36.515188] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:59.142 [2024-02-14 19:24:36.515197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:59.142 [2024-02-14 19:24:36.515207] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:59.142 [2024-02-14 19:24:36.515218] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.142 [2024-02-14 19:24:36.515229] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:59.142 [2024-02-14 19:24:36.515239] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:59.143 [2024-02-14 19:24:36.515249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:59.143 [2024-02-14 19:24:36.515258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:59.143 [2024-02-14 19:24:36.515267] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:59.143 [2024-02-14 19:24:36.515277] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:59.143 [2024-02-14 19:24:36.515286] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:59.143 [2024-02-14 19:24:36.515296] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:59.143 [2024-02-14 19:24:36.515305] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf 
ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:59.143 [2024-02-14 19:24:36.515315] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:59.143 [2024-02-14 19:24:36.515324] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:59.143 [2024-02-14 19:24:36.515334] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:59.143 [2024-02-14 19:24:36.515344] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:59.143 [2024-02-14 19:24:36.515353] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:59.143 [2024-02-14 19:24:36.515368] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.143 [2024-02-14 19:24:36.515379] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:59.143 [2024-02-14 19:24:36.515388] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:59.143 [2024-02-14 19:24:36.515397] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:59.143 [2024-02-14 19:24:36.515407] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:59.143 [2024-02-14 19:24:36.515418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.143 [2024-02-14 19:24:36.515427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:59.143 [2024-02-14 19:24:36.515437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:23:59.143 [2024-02-14 19:24:36.515450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.143 [2024-02-14 19:24:36.531025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.143 [2024-02-14 19:24:36.531064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.143 [2024-02-14 19:24:36.531095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.527 ms 00:23:59.143 [2024-02-14 19:24:36.531105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.143 [2024-02-14 19:24:36.531187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.143 [2024-02-14 19:24:36.531202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.143 [2024-02-14 19:24:36.531212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:59.143 [2024-02-14 19:24:36.531220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.404 [2024-02-14 19:24:36.576647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.404 [2024-02-14 19:24:36.576694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.404 [2024-02-14 19:24:36.576711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.364 ms 00:23:59.404 [2024-02-14 19:24:36.576720] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:59.404 [2024-02-14 19:24:36.576779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.404 [2024-02-14 19:24:36.576794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.404 [2024-02-14 19:24:36.576804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.404 [2024-02-14 19:24:36.576819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.404 [2024-02-14 19:24:36.577159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.404 [2024-02-14 19:24:36.577176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.404 [2024-02-14 19:24:36.577188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:23:59.404 [2024-02-14 19:24:36.577197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.404 [2024-02-14 19:24:36.577318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.404 [2024-02-14 19:24:36.577334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.404 [2024-02-14 19:24:36.577345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:59.404 [2024-02-14 19:24:36.577353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.404 [2024-02-14 19:24:36.591408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.404 [2024-02-14 19:24:36.591445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.404 [2024-02-14 19:24:36.591460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.028 ms 00:23:59.404 [2024-02-14 19:24:36.591473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.404 [2024-02-14 19:24:36.604678] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:59.404 [2024-02-14 19:24:36.604715] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:59.404 [2024-02-14 19:24:36.604730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.404 [2024-02-14 19:24:36.604740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:59.404 [2024-02-14 19:24:36.604750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.093 ms 00:23:59.404 [2024-02-14 19:24:36.604758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.404 [2024-02-14 19:24:36.628557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.404 [2024-02-14 19:24:36.628597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:59.404 [2024-02-14 19:24:36.628612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.758 ms 00:23:59.404 [2024-02-14 19:24:36.628621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.404 [2024-02-14 19:24:36.641379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.404 [2024-02-14 19:24:36.641415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:59.404 [2024-02-14 19:24:36.641429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.718 ms 00:23:59.404 [2024-02-14 19:24:36.641437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.404 [2024-02-14 
19:24:36.653906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.653954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:59.405 [2024-02-14 19:24:36.653984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.431 ms 00:23:59.405 [2024-02-14 19:24:36.653993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.654405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.654429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.405 [2024-02-14 19:24:36.654441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:23:59.405 [2024-02-14 19:24:36.654450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.713052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.713111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:59.405 [2024-02-14 19:24:36.713129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.577 ms 00:23:59.405 [2024-02-14 19:24:36.713145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.723164] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:59.405 [2024-02-14 19:24:36.725161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.725190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.405 [2024-02-14 19:24:36.725204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.958 ms 00:23:59.405 [2024-02-14 19:24:36.725213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.725291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.725308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:59.405 [2024-02-14 19:24:36.725318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:59.405 [2024-02-14 19:24:36.725327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.725404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.725420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.405 [2024-02-14 19:24:36.725430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:59.405 [2024-02-14 19:24:36.725438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.727269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.727307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:59.405 [2024-02-14 19:24:36.727320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.812 ms 00:23:59.405 [2024-02-14 19:24:36.727328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.727360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.727379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.405 [2024-02-14 19:24:36.727389] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:59.405 [2024-02-14 19:24:36.727397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.727433] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:59.405 [2024-02-14 19:24:36.727449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.727458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:59.405 [2024-02-14 19:24:36.727466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:59.405 [2024-02-14 19:24:36.727475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.751728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.751767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.405 [2024-02-14 19:24:36.751809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.185 ms 00:23:59.405 [2024-02-14 19:24:36.751819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.751913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.405 [2024-02-14 19:24:36.751929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.405 [2024-02-14 19:24:36.751939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:59.405 [2024-02-14 19:24:36.751948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.405 [2024-02-14 19:24:36.753383] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 262.324 ms, result 0 00:24:43.368  Copying: 23/1024 [MB] (23 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (23 MBps) Copying: 95/1024 [MB] (23 MBps) Copying: 118/1024 [MB] (23 MBps) Copying: 142/1024 [MB] (24 MBps) Copying: 166/1024 [MB] (24 MBps) Copying: 190/1024 [MB] (23 MBps) Copying: 214/1024 [MB] (24 MBps) Copying: 238/1024 [MB] (23 MBps) Copying: 262/1024 [MB] (23 MBps) Copying: 286/1024 [MB] (23 MBps) Copying: 309/1024 [MB] (23 MBps) Copying: 332/1024 [MB] (23 MBps) Copying: 356/1024 [MB] (23 MBps) Copying: 380/1024 [MB] (23 MBps) Copying: 403/1024 [MB] (23 MBps) Copying: 427/1024 [MB] (23 MBps) Copying: 450/1024 [MB] (23 MBps) Copying: 474/1024 [MB] (23 MBps) Copying: 498/1024 [MB] (23 MBps) Copying: 522/1024 [MB] (23 MBps) Copying: 545/1024 [MB] (23 MBps) Copying: 569/1024 [MB] (24 MBps) Copying: 593/1024 [MB] (24 MBps) Copying: 618/1024 [MB] (24 MBps) Copying: 642/1024 [MB] (24 MBps) Copying: 666/1024 [MB] (24 MBps) Copying: 689/1024 [MB] (23 MBps) Copying: 713/1024 [MB] (23 MBps) Copying: 737/1024 [MB] (24 MBps) Copying: 761/1024 [MB] (24 MBps) Copying: 786/1024 [MB] (24 MBps) Copying: 810/1024 [MB] (23 MBps) Copying: 834/1024 [MB] (24 MBps) Copying: 858/1024 [MB] (24 MBps) Copying: 882/1024 [MB] (24 MBps) Copying: 906/1024 [MB] (24 MBps) Copying: 930/1024 [MB] (23 MBps) Copying: 954/1024 [MB] (24 MBps) Copying: 978/1024 [MB] (24 MBps) Copying: 1003/1024 [MB] (24 MBps) Copying: 1023/1024 [MB] (19 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-02-14 19:25:20.764260] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:24:43.368 
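As a quick cross-check of the copy phase above: the reported "average 23 MBps" is consistent with the surrounding timestamps. The following is a minimal illustrative sketch (not part of the test suite), assuming the 'FTL startup' completion time and the deprecation-warning record that follows the final "Copying: 1024/1024 [MB]" update as the endpoints of the copy window:

    # Illustrative only -- endpoints are assumptions read off the log above.
    from datetime import datetime

    #   19:24:36.753383 -> 'FTL startup' management process finished
    #   19:25:20.764260 -> record following the final progress update
    t_start = datetime.strptime("19:24:36.753383", "%H:%M:%S.%f")
    t_end = datetime.strptime("19:25:20.764260", "%H:%M:%S.%f")

    copied_mb = 1024  # total data copied, per the final progress line
    elapsed_s = (t_end - t_start).total_seconds()

    print(f"elapsed: {elapsed_s:.1f} s")                 # ~44.0 s
    print(f"average: {copied_mb / elapsed_s:.1f} MB/s")  # ~23.3 MB/s, matching the logged average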
[2024-02-14 19:25:20.772876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.368 [2024-02-14 19:25:20.772948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:43.368 [2024-02-14 19:25:20.772984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:43.368 [2024-02-14 19:25:20.772994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.368 [2024-02-14 19:25:20.774107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:43.368 [2024-02-14 19:25:20.779245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.368 [2024-02-14 19:25:20.779281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:43.368 [2024-02-14 19:25:20.779297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:24:43.368 [2024-02-14 19:25:20.779306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.626 [2024-02-14 19:25:20.792185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.626 [2024-02-14 19:25:20.792224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:43.626 [2024-02-14 19:25:20.792256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.618 ms 00:24:43.626 [2024-02-14 19:25:20.792266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.626 [2024-02-14 19:25:20.812451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.626 [2024-02-14 19:25:20.812505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:43.626 [2024-02-14 19:25:20.812538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.164 ms 00:24:43.626 [2024-02-14 19:25:20.812548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.626 [2024-02-14 19:25:20.818323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.626 [2024-02-14 19:25:20.818353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:43.627 [2024-02-14 19:25:20.818366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.739 ms 00:24:43.627 [2024-02-14 19:25:20.818381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.627 [2024-02-14 19:25:20.842958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.627 [2024-02-14 19:25:20.842997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:43.627 [2024-02-14 19:25:20.843011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.516 ms 00:24:43.627 [2024-02-14 19:25:20.843020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.627 [2024-02-14 19:25:20.857469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.627 [2024-02-14 19:25:20.857511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:43.627 [2024-02-14 19:25:20.857534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.411 ms 00:24:43.627 [2024-02-14 19:25:20.857544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.627 [2024-02-14 19:25:20.967689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.627 [2024-02-14 19:25:20.967734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:43.627 [2024-02-14 
19:25:20.967780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.103 ms 00:24:43.627 [2024-02-14 19:25:20.967791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.627 [2024-02-14 19:25:20.996852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.627 [2024-02-14 19:25:20.996903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:43.627 [2024-02-14 19:25:20.996917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.040 ms 00:24:43.627 [2024-02-14 19:25:20.996926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.627 [2024-02-14 19:25:21.021799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.627 [2024-02-14 19:25:21.021869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:43.627 [2024-02-14 19:25:21.021915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.819 ms 00:24:43.627 [2024-02-14 19:25:21.021940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.887 [2024-02-14 19:25:21.046941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.887 [2024-02-14 19:25:21.046978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:43.887 [2024-02-14 19:25:21.046992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.955 ms 00:24:43.887 [2024-02-14 19:25:21.047000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.887 [2024-02-14 19:25:21.071450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.887 [2024-02-14 19:25:21.071495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:43.887 [2024-02-14 19:25:21.071528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.341 ms 00:24:43.887 [2024-02-14 19:25:21.071537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.887 [2024-02-14 19:25:21.071577] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:43.887 [2024-02-14 19:25:21.071598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open 00:24:43.887 [2024-02-14 19:25:21.071610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 
00:24:43.887 [2024-02-14 19:25:21.071713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:43.887 [2024-02-14 19:25:21.071761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.071992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 
wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072523] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:43.888 [2024-02-14 19:25:21.072724] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:43.888 [2024-02-14 19:25:21.072735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d995f5ad-0556-475a-9889-bdde60ce73db 00:24:43.888 [2024-02-14 19:25:21.072747] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048 00:24:43.888 [2024-02-14 19:25:21.072756] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131008 00:24:43.888 [2024-02-14 19:25:21.072766] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048 00:24:43.889 [2024-02-14 19:25:21.072777] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:24:43.889 [2024-02-14 19:25:21.072787] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:43.889 [2024-02-14 19:25:21.072798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:43.889 [2024-02-14 19:25:21.072821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:43.889 [2024-02-14 19:25:21.072830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:43.889 [2024-02-14 19:25:21.072838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:43.889 [2024-02-14 19:25:21.072848] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.889 [2024-02-14 19:25:21.072862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:43.889 [2024-02-14 19:25:21.072872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:24:43.889 [2024-02-14 19:25:21.072881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.085916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.889 [2024-02-14 19:25:21.085950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:43.889 [2024-02-14 19:25:21.085980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.998 ms 00:24:43.889 [2024-02-14 19:25:21.085989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.086213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.889 [2024-02-14 19:25:21.086229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:43.889 [2024-02-14 19:25:21.086239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:24:43.889 [2024-02-14 19:25:21.086248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.121370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.121409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:43.889 [2024-02-14 19:25:21.121422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.121437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.121519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.121536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:43.889 [2024-02-14 19:25:21.121546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.121563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.121657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.121675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:43.889 [2024-02-14 19:25:21.121687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.121696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.121723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.121736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:43.889 [2024-02-14 19:25:21.121745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.121754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.196346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.196400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:43.889 [2024-02-14 19:25:21.196415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.196424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:43.889 [2024-02-14 19:25:21.226603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.226639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:43.889 [2024-02-14 19:25:21.226653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.226662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.226734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.226751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.889 [2024-02-14 19:25:21.226760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.226768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.226814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.226836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.889 [2024-02-14 19:25:21.226845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.226854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.226958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.226975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.889 [2024-02-14 19:25:21.226985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.226993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.227035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.227049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:43.889 [2024-02-14 19:25:21.227064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.227073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.227111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.227125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.889 [2024-02-14 19:25:21.227134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.227143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.227188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.889 [2024-02-14 19:25:21.227207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.889 [2024-02-14 19:25:21.227216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.889 [2024-02-14 19:25:21.227225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.889 [2024-02-14 19:25:21.227342] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 457.439 ms, result 0 00:24:45.267 00:24:45.267 00:24:45.267 19:25:22 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:47.171 19:25:24 -- ftl/dirty_shutdown.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:47.171 [2024-02-14 19:25:24.332726] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:24:47.171 [2024-02-14 19:25:24.332833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77996 ] 00:24:47.171 [2024-02-14 19:25:24.491904] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.430 [2024-02-14 19:25:24.689307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.430 [2024-02-14 19:25:24.689399] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:24:47.688 [2024-02-14 19:25:24.937333] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:47.688 [2024-02-14 19:25:24.937412] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:47.688 [2024-02-14 19:25:25.086674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.688 [2024-02-14 19:25:25.086718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:47.688 [2024-02-14 19:25:25.086756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:47.688 [2024-02-14 19:25:25.086766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.688 [2024-02-14 19:25:25.086827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.688 [2024-02-14 19:25:25.086844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.688 [2024-02-14 19:25:25.086855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:47.688 [2024-02-14 19:25:25.086864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.688 [2024-02-14 19:25:25.086891] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:47.688 [2024-02-14 19:25:25.087651] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:47.688 [2024-02-14 19:25:25.087680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.688 [2024-02-14 19:25:25.087698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.688 [2024-02-14 19:25:25.087709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:24:47.688 [2024-02-14 19:25:25.087722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.688 [2024-02-14 19:25:25.088756] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:47.688 [2024-02-14 19:25:25.101645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.688 [2024-02-14 19:25:25.101697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:47.688 [2024-02-14 19:25:25.101714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.890 ms 00:24:47.688 [2024-02-14 19:25:25.101723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.688 [2024-02-14 
19:25:25.101783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.688 [2024-02-14 19:25:25.101801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:47.688 [2024-02-14 19:25:25.101812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:47.688 [2024-02-14 19:25:25.101820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.948 [2024-02-14 19:25:25.106640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.948 [2024-02-14 19:25:25.106677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.948 [2024-02-14 19:25:25.106708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.704 ms 00:24:47.948 [2024-02-14 19:25:25.106717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.948 [2024-02-14 19:25:25.106860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.948 [2024-02-14 19:25:25.106879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.948 [2024-02-14 19:25:25.106893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:24:47.948 [2024-02-14 19:25:25.106903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.948 [2024-02-14 19:25:25.106968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.948 [2024-02-14 19:25:25.106984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:47.948 [2024-02-14 19:25:25.106996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:47.948 [2024-02-14 19:25:25.107005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.948 [2024-02-14 19:25:25.107034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:47.948 [2024-02-14 19:25:25.110945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.948 [2024-02-14 19:25:25.110979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.948 [2024-02-14 19:25:25.111009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.918 ms 00:24:47.948 [2024-02-14 19:25:25.111019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.948 [2024-02-14 19:25:25.111065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.948 [2024-02-14 19:25:25.111080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:47.948 [2024-02-14 19:25:25.111094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:47.948 [2024-02-14 19:25:25.111110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.948 [2024-02-14 19:25:25.111150] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:47.948 [2024-02-14 19:25:25.111177] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:47.948 [2024-02-14 19:25:25.111212] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:47.948 [2024-02-14 19:25:25.111234] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:47.948 [2024-02-14 19:25:25.111305] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob 
store 0x138 bytes 00:24:47.948 [2024-02-14 19:25:25.111322] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:47.948 [2024-02-14 19:25:25.111333] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:47.948 [2024-02-14 19:25:25.111345] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:47.948 [2024-02-14 19:25:25.111363] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:47.948 [2024-02-14 19:25:25.111376] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:47.949 [2024-02-14 19:25:25.111389] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:47.949 [2024-02-14 19:25:25.111401] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:47.949 [2024-02-14 19:25:25.111409] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:47.949 [2024-02-14 19:25:25.111420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.949 [2024-02-14 19:25:25.111430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:47.949 [2024-02-14 19:25:25.111440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:24:47.949 [2024-02-14 19:25:25.111452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.949 [2024-02-14 19:25:25.111585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.949 [2024-02-14 19:25:25.111623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:47.949 [2024-02-14 19:25:25.111641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:24:47.949 [2024-02-14 19:25:25.111651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.949 [2024-02-14 19:25:25.111735] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:47.949 [2024-02-14 19:25:25.111751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:47.949 [2024-02-14 19:25:25.111762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:47.949 [2024-02-14 19:25:25.111772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.949 [2024-02-14 19:25:25.111788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:47.949 [2024-02-14 19:25:25.111797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:47.949 [2024-02-14 19:25:25.111806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:47.949 [2024-02-14 19:25:25.111815] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:47.949 [2024-02-14 19:25:25.111824] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:47.949 [2024-02-14 19:25:25.111834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:47.949 [2024-02-14 19:25:25.111843] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:47.949 [2024-02-14 19:25:25.111852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:47.949 [2024-02-14 19:25:25.111860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:47.949 [2024-02-14 19:25:25.111870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 
00:24:47.949 [2024-02-14 19:25:25.111895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:47.949 [2024-02-14 19:25:25.111905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.949 [2024-02-14 19:25:25.111929] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:47.949 [2024-02-14 19:25:25.111938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:47.949 [2024-02-14 19:25:25.111960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.949 [2024-02-14 19:25:25.111969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:47.949 [2024-02-14 19:25:25.112005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:47.949 [2024-02-14 19:25:25.112014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:47.949 [2024-02-14 19:25:25.112023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:47.949 [2024-02-14 19:25:25.112032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:47.949 [2024-02-14 19:25:25.112041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:47.949 [2024-02-14 19:25:25.112050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:47.949 [2024-02-14 19:25:25.112059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:47.949 [2024-02-14 19:25:25.112068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:47.949 [2024-02-14 19:25:25.112077] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:47.949 [2024-02-14 19:25:25.112086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:47.949 [2024-02-14 19:25:25.112094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:47.949 [2024-02-14 19:25:25.112103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:47.949 [2024-02-14 19:25:25.112112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:47.949 [2024-02-14 19:25:25.112121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:47.949 [2024-02-14 19:25:25.112130] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:47.949 [2024-02-14 19:25:25.112139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:47.949 [2024-02-14 19:25:25.112148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:47.949 [2024-02-14 19:25:25.112156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:47.949 [2024-02-14 19:25:25.112165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:47.949 [2024-02-14 19:25:25.112174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:47.949 [2024-02-14 19:25:25.112183] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:47.949 [2024-02-14 19:25:25.112193] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:47.949 [2024-02-14 19:25:25.112202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:47.949 [2024-02-14 19:25:25.112212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.949 [2024-02-14 19:25:25.112221] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:47.949 [2024-02-14 19:25:25.112232] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 102400.25 MiB 00:24:47.949 [2024-02-14 19:25:25.112241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:47.949 [2024-02-14 19:25:25.112256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:47.949 [2024-02-14 19:25:25.112278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:47.949 [2024-02-14 19:25:25.112288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:47.949 [2024-02-14 19:25:25.112297] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:47.949 [2024-02-14 19:25:25.112309] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:47.949 [2024-02-14 19:25:25.112321] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:47.949 [2024-02-14 19:25:25.112331] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:47.949 [2024-02-14 19:25:25.112340] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:47.949 [2024-02-14 19:25:25.112350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:47.949 [2024-02-14 19:25:25.112360] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:47.949 [2024-02-14 19:25:25.112369] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:47.949 [2024-02-14 19:25:25.112379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:47.949 [2024-02-14 19:25:25.112388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:47.949 [2024-02-14 19:25:25.112398] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:47.949 [2024-02-14 19:25:25.112407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:47.949 [2024-02-14 19:25:25.112418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:47.949 [2024-02-14 19:25:25.112427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:47.949 [2024-02-14 19:25:25.112437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:47.949 [2024-02-14 19:25:25.112447] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:47.949 [2024-02-14 19:25:25.112458] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:47.949 [2024-02-14 19:25:25.112468] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x20 blk_sz:0x20 00:24:47.949 [2024-02-14 19:25:25.112478] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:47.949 [2024-02-14 19:25:25.112487] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:47.949 [2024-02-14 19:25:25.112497] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:47.949 [2024-02-14 19:25:25.112508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.949 [2024-02-14 19:25:25.112534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:47.949 [2024-02-14 19:25:25.112549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:24:47.949 [2024-02-14 19:25:25.112558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.949 [2024-02-14 19:25:25.127407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.949 [2024-02-14 19:25:25.127447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.949 [2024-02-14 19:25:25.127468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.772 ms 00:24:47.949 [2024-02-14 19:25:25.127477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.949 [2024-02-14 19:25:25.127604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.949 [2024-02-14 19:25:25.127622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:47.949 [2024-02-14 19:25:25.127634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:47.949 [2024-02-14 19:25:25.127643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.949 [2024-02-14 19:25:25.172995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.949 [2024-02-14 19:25:25.173038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.949 [2024-02-14 19:25:25.173055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.291 ms 00:24:47.949 [2024-02-14 19:25:25.173064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.949 [2024-02-14 19:25:25.173114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.173130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.950 [2024-02-14 19:25:25.173140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:47.950 [2024-02-14 19:25:25.173149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.173501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.173540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.950 [2024-02-14 19:25:25.173552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:24:47.950 [2024-02-14 19:25:25.173560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.173701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.173718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.950 [2024-02-14 19:25:25.173728] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:24:47.950 [2024-02-14 19:25:25.173737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.187610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.187647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.950 [2024-02-14 19:25:25.187662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.847 ms 00:24:47.950 [2024-02-14 19:25:25.187671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.200522] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:47.950 [2024-02-14 19:25:25.200559] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:47.950 [2024-02-14 19:25:25.200594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.200620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:47.950 [2024-02-14 19:25:25.200633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.811 ms 00:24:47.950 [2024-02-14 19:25:25.200643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.223677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.223715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:47.950 [2024-02-14 19:25:25.223731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.986 ms 00:24:47.950 [2024-02-14 19:25:25.223740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.235990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.236026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:47.950 [2024-02-14 19:25:25.236040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.202 ms 00:24:47.950 [2024-02-14 19:25:25.236049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.248404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.248440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:47.950 [2024-02-14 19:25:25.248471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.318 ms 00:24:47.950 [2024-02-14 19:25:25.248480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.248949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.248977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:47.950 [2024-02-14 19:25:25.248990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:24:47.950 [2024-02-14 19:25:25.249010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.308852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.308925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:47.950 [2024-02-14 19:25:25.308944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.820 ms 00:24:47.950 [2024-02-14 19:25:25.308953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.318836] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:47.950 [2024-02-14 19:25:25.320732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.320766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:47.950 [2024-02-14 19:25:25.320780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.722 ms 00:24:47.950 [2024-02-14 19:25:25.320790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.320865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.320882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:47.950 [2024-02-14 19:25:25.320893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:47.950 [2024-02-14 19:25:25.320902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.321961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.321998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:47.950 [2024-02-14 19:25:25.322012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:24:47.950 [2024-02-14 19:25:25.322029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.323800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.323832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:47.950 [2024-02-14 19:25:25.323861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:24:47.950 [2024-02-14 19:25:25.323877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.323941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.323955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:47.950 [2024-02-14 19:25:25.323965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:47.950 [2024-02-14 19:25:25.323974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.324014] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:47.950 [2024-02-14 19:25:25.324027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.324036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:47.950 [2024-02-14 19:25:25.324047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:47.950 [2024-02-14 19:25:25.324055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.348280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.348318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:47.950 [2024-02-14 19:25:25.348350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.204 ms 00:24:47.950 [2024-02-14 19:25:25.348367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
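The superblock metadata dump a few entries above reports each FTL region twice: dump_region prints a name with its offset and size in MiB, while ftl_superblock_v5_md_layout_dump prints a numeric region type with blk_offs/blk_sz counted in FTL blocks (hex). The two halves agree if one block is 4 KiB; that block size is an assumption here, but it is the only value consistent with the dump itself (data_nvc is listed both as 0x100000 blocks and as 4096.00 MiB). The short Python sketch below is a hypothetical cross-check, not part of SPDK or of this test, and the type-to-name pairing in it is inferred by matching offsets and sizes between the two dumps.

# Hypothetical cross-check of the [FTL][ftl0] layout dump above (not part of SPDK).
# blk_offs/blk_sz are counted in FTL blocks; 4096 bytes per block is assumed and is
# consistent with the dump itself (data_nvc: 0x100000 blocks <-> 4096.00 MiB).
FTL_BLOCK_SIZE = 4096
MIB = 1024 * 1024

# (name as printed by dump_region, type from the v5 SB dump, blk_offs, blk_sz)
regions = [
    ("l2p",      0x2, 0x20,   0x5000),
    ("band_md",  0x3, 0x5020, 0x80),
    ("p2l0",     0xa, 0x5120, 0x400),
    ("data_nvc", 0x8, 0x61e0, 0x100000),
]

for name, rtype, blk_offs, blk_sz in regions:
    offset_mib = blk_offs * FTL_BLOCK_SIZE / MIB
    size_mib = blk_sz * FTL_BLOCK_SIZE / MIB
    print(f"type {rtype:#04x} {name:9s} offset: {offset_mib:8.2f} MiB  blocks: {size_mib:8.2f} MiB")

Running it reproduces the offset and blocks figures that dump_region prints for those regions (80.00 MiB for l2p, 0.50 MiB for band_md, 4.00 MiB for p2l0, 4096.00 MiB for data_nvc), which is a quick way to sanity-check a layout dump by hand when comparing the two halves.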
00:24:47.950 [2024-02-14 19:25:25.348439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.950 [2024-02-14 19:25:25.348456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:47.950 [2024-02-14 19:25:25.348467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:47.950 [2024-02-14 19:25:25.348477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-02-14 19:25:25.355719] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 267.348 ms, result 0 00:25:27.191  Copying: 948/1048576 [kB] (948 kBps) Copying: 5288/1048576 [kB] (4340 kBps) Copying: 30/1024 [MB] (25 MBps) Copying: 58/1024 [MB] (27 MBps) Copying: 86/1024 [MB] (28 MBps) Copying: 113/1024 [MB] (27 MBps) Copying: 141/1024 [MB] (27 MBps) Copying: 168/1024 [MB] (27 MBps) Copying: 196/1024 [MB] (27 MBps) Copying: 223/1024 [MB] (27 MBps) Copying: 250/1024 [MB] (27 MBps) Copying: 278/1024 [MB] (27 MBps) Copying: 306/1024 [MB] (27 MBps) Copying: 334/1024 [MB] (27 MBps) Copying: 361/1024 [MB] (27 MBps) Copying: 389/1024 [MB] (27 MBps) Copying: 416/1024 [MB] (27 MBps) Copying: 444/1024 [MB] (27 MBps) Copying: 472/1024 [MB] (27 MBps) Copying: 500/1024 [MB] (28 MBps) Copying: 528/1024 [MB] (28 MBps) Copying: 556/1024 [MB] (28 MBps) Copying: 584/1024 [MB] (28 MBps) Copying: 612/1024 [MB] (27 MBps) Copying: 640/1024 [MB] (28 MBps) Copying: 668/1024 [MB] (27 MBps) Copying: 695/1024 [MB] (27 MBps) Copying: 724/1024 [MB] (28 MBps) Copying: 753/1024 [MB] (28 MBps) Copying: 781/1024 [MB] (28 MBps) Copying: 810/1024 [MB] (28 MBps) Copying: 838/1024 [MB] (27 MBps) Copying: 865/1024 [MB] (27 MBps) Copying: 893/1024 [MB] (27 MBps) Copying: 920/1024 [MB] (27 MBps) Copying: 948/1024 [MB] (27 MBps) Copying: 975/1024 [MB] (27 MBps) Copying: 1002/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-02-14 19:26:04.314874] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:25:27.191 [2024-02-14 19:26:04.315275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.191 [2024-02-14 19:26:04.315306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:27.191 [2024-02-14 19:26:04.315326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:27.191 [2024-02-14 19:26:04.315338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.191 [2024-02-14 19:26:04.315381] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:27.191 [2024-02-14 19:26:04.318890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.191 [2024-02-14 19:26:04.318929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:27.191 [2024-02-14 19:26:04.318962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:25:27.191 [2024-02-14 19:26:04.318973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.191 [2024-02-14 19:26:04.319276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.191 [2024-02-14 19:26:04.319302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:27.191 [2024-02-14 19:26:04.319316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.271 ms 00:25:27.191 [2024-02-14 19:26:04.319327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.191 [2024-02-14 19:26:04.330628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.191 [2024-02-14 19:26:04.330693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:27.191 [2024-02-14 19:26:04.330722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.278 ms 00:25:27.191 [2024-02-14 19:26:04.330733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.191 [2024-02-14 19:26:04.337159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.191 [2024-02-14 19:26:04.337196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:27.192 [2024-02-14 19:26:04.337210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.383 ms 00:25:27.192 [2024-02-14 19:26:04.337219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.192 [2024-02-14 19:26:04.363918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.192 [2024-02-14 19:26:04.363957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:27.192 [2024-02-14 19:26:04.363973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.632 ms 00:25:27.192 [2024-02-14 19:26:04.363981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.192 [2024-02-14 19:26:04.378284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.192 [2024-02-14 19:26:04.378321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:27.192 [2024-02-14 19:26:04.378343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.265 ms 00:25:27.192 [2024-02-14 19:26:04.378356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.192 [2024-02-14 19:26:04.382357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.192 [2024-02-14 19:26:04.382398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:27.192 [2024-02-14 19:26:04.382444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.975 ms 00:25:27.192 [2024-02-14 19:26:04.382454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.192 [2024-02-14 19:26:04.408881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.192 [2024-02-14 19:26:04.408932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:27.192 [2024-02-14 19:26:04.408963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.407 ms 00:25:27.192 [2024-02-14 19:26:04.408973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.192 [2024-02-14 19:26:04.435369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.192 [2024-02-14 19:26:04.435417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:27.192 [2024-02-14 19:26:04.435449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.342 ms 00:25:27.192 [2024-02-14 19:26:04.435458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.192 [2024-02-14 19:26:04.460631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.192 [2024-02-14 19:26:04.460669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:27.192 [2024-02-14 19:26:04.460683] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.104 ms 00:25:27.192 [2024-02-14 19:26:04.460692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.192 [2024-02-14 19:26:04.485574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.192 [2024-02-14 19:26:04.485610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:27.192 [2024-02-14 19:26:04.485642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.786 ms 00:25:27.192 [2024-02-14 19:26:04.485651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.192 [2024-02-14 19:26:04.485689] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:27.192 [2024-02-14 19:26:04.485724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:27.192 [2024-02-14 19:26:04.485737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:25:27.192 [2024-02-14 19:26:04.485748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485934] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.485991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486183] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:27.192 [2024-02-14 19:26:04.486377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 
19:26:04.486433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:25:27.193 [2024-02-14 19:26:04.486753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:27.193 [2024-02-14 19:26:04.486820] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:27.193 [2024-02-14 19:26:04.486831] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d995f5ad-0556-475a-9889-bdde60ce73db 00:25:27.193 [2024-02-14 19:26:04.486842] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:25:27.193 [2024-02-14 19:26:04.486851] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136640 00:25:27.193 [2024-02-14 19:26:04.486860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134656 00:25:27.193 [2024-02-14 19:26:04.486870] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:25:27.193 [2024-02-14 19:26:04.486880] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:27.193 [2024-02-14 19:26:04.486889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:27.193 [2024-02-14 19:26:04.486899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:27.193 [2024-02-14 19:26:04.486907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:27.193 [2024-02-14 19:26:04.486916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:27.193 [2024-02-14 19:26:04.486926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.193 [2024-02-14 19:26:04.486952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:27.193 [2024-02-14 19:26:04.487009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:25:27.193 [2024-02-14 19:26:04.487018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.193 [2024-02-14 19:26:04.500523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.193 [2024-02-14 19:26:04.500550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:27.193 [2024-02-14 19:26:04.500564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.467 ms 00:25:27.193 [2024-02-14 19:26:04.500573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.193 [2024-02-14 19:26:04.500826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.193 [2024-02-14 19:26:04.500851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:27.193 [2024-02-14 19:26:04.500864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:25:27.193 [2024-02-14 19:26:04.500872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.193 [2024-02-14 19:26:04.537841] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.193 [2024-02-14 19:26:04.537933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:27.193 [2024-02-14 19:26:04.537950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.193 [2024-02-14 19:26:04.537959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.193 [2024-02-14 19:26:04.538019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.193 [2024-02-14 19:26:04.538033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:27.193 [2024-02-14 19:26:04.538043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.193 [2024-02-14 19:26:04.538052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.193 [2024-02-14 19:26:04.538135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.193 [2024-02-14 19:26:04.538154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:27.193 [2024-02-14 19:26:04.538164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.193 [2024-02-14 19:26:04.538173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.193 [2024-02-14 19:26:04.538193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.193 [2024-02-14 19:26:04.538212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:27.193 [2024-02-14 19:26:04.538222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.193 [2024-02-14 19:26:04.538231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.452 [2024-02-14 19:26:04.618716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.452 [2024-02-14 19:26:04.618766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:27.452 [2024-02-14 19:26:04.618782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.452 [2024-02-14 19:26:04.618791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.452 [2024-02-14 19:26:04.648829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.452 [2024-02-14 19:26:04.648864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:27.452 [2024-02-14 19:26:04.648878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.452 [2024-02-14 19:26:04.648887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.452 [2024-02-14 19:26:04.648970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.452 [2024-02-14 19:26:04.648990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:27.452 [2024-02-14 19:26:04.649001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.452 [2024-02-14 19:26:04.649010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.452 [2024-02-14 19:26:04.649058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.452 [2024-02-14 19:26:04.649072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:27.452 [2024-02-14 19:26:04.649090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.452 [2024-02-14 19:26:04.649099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:27.452 [2024-02-14 19:26:04.649194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.452 [2024-02-14 19:26:04.649211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:27.452 [2024-02-14 19:26:04.649220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.452 [2024-02-14 19:26:04.649228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.452 [2024-02-14 19:26:04.649282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.452 [2024-02-14 19:26:04.649299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:27.452 [2024-02-14 19:26:04.649309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.452 [2024-02-14 19:26:04.649323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.453 [2024-02-14 19:26:04.649362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.453 [2024-02-14 19:26:04.649375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:27.453 [2024-02-14 19:26:04.649384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.453 [2024-02-14 19:26:04.649393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.453 [2024-02-14 19:26:04.649441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.453 [2024-02-14 19:26:04.649455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:27.453 [2024-02-14 19:26:04.649469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.453 [2024-02-14 19:26:04.649478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.453 [2024-02-14 19:26:04.649701] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.357 ms, result 0 00:25:28.388 00:25:28.388 00:25:28.388 19:26:05 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:30.291 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:30.291 19:26:07 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:30.291 [2024-02-14 19:26:07.393383] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
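Both management sequences above end with a finish_msg summary ('FTL startup', duration = 267.348 ms, and 'FTL shutdown', duration = 334.357 ms), and every intermediate step is logged as a 407:trace_step name entry followed by a 409:trace_step duration entry. Where that total needs explaining, a small post-processing script can pair those two entries and rank the steps; the sketch below is hypothetical (not part of SPDK or of the test suite) and assumes only the record format visible in this log, where each entry begins with an uptime stamp such as '00:25:27.192 [2024-02-14 ...]'.

#!/usr/bin/env python3
# Hypothetical log filter (not part of SPDK): pair each trace_step "name:" entry
# with the following "duration:" entry and rank the FTL management steps by time.
import re
import sys

text = sys.stdin.read()
# Each record starts with an uptime stamp like "00:25:27.192 [2024-02-14 19:26:04...]".
entries = re.split(r"\s+(?=\d{2}:\d{2}:\d{2}\.\d{3} \[20)", text)

steps = []
pending = None
for entry in entries:
    m = re.search(r"407:trace_step:.*?name: (.*)", entry, re.S)
    if m:
        pending = " ".join(m.group(1).split())
        continue
    m = re.search(r"409:trace_step:.*?duration: ([0-9.]+) ms", entry, re.S)
    if m and pending is not None:
        steps.append((pending, float(m.group(1))))
        pending = None

for name, ms in sorted(steps, key=lambda s: s[1], reverse=True):
    print(f"{ms:9.3f} ms  {name}")

For the startup sequence above, 'Restore P2L checkpoints' (59.820 ms) and 'Initialize NV cache' (45.291 ms) alone account for over a third of the 267.348 ms total, which is the kind of breakdown such a filter makes easy to read off.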
00:25:30.291 [2024-02-14 19:26:07.393565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78424 ] 00:25:30.291 [2024-02-14 19:26:07.565393] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.551 [2024-02-14 19:26:07.749332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.551 [2024-02-14 19:26:07.749423] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:25:30.811 [2024-02-14 19:26:08.002890] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:30.811 [2024-02-14 19:26:08.002959] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:30.811 [2024-02-14 19:26:08.154769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.811 [2024-02-14 19:26:08.154816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:30.811 [2024-02-14 19:26:08.154853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:30.811 [2024-02-14 19:26:08.154864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.811 [2024-02-14 19:26:08.154921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.811 [2024-02-14 19:26:08.154938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.811 [2024-02-14 19:26:08.154948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:30.811 [2024-02-14 19:26:08.154957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.811 [2024-02-14 19:26:08.154983] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:30.811 [2024-02-14 19:26:08.155825] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:30.811 [2024-02-14 19:26:08.155860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.811 [2024-02-14 19:26:08.155874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.811 [2024-02-14 19:26:08.155885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:25:30.811 [2024-02-14 19:26:08.155899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.811 [2024-02-14 19:26:08.157022] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:30.811 [2024-02-14 19:26:08.170332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.811 [2024-02-14 19:26:08.170370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:30.811 [2024-02-14 19:26:08.170402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.312 ms 00:25:30.811 [2024-02-14 19:26:08.170412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.811 [2024-02-14 19:26:08.170502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.811 [2024-02-14 19:26:08.170539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:30.811 [2024-02-14 19:26:08.170550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 
ms 00:25:30.811 [2024-02-14 19:26:08.170560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.811 [2024-02-14 19:26:08.175047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.811 [2024-02-14 19:26:08.175082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.811 [2024-02-14 19:26:08.175110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.384 ms 00:25:30.811 [2024-02-14 19:26:08.175119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.811 [2024-02-14 19:26:08.175213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.811 [2024-02-14 19:26:08.175231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.811 [2024-02-14 19:26:08.175241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:30.811 [2024-02-14 19:26:08.175253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.811 [2024-02-14 19:26:08.175301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.811 [2024-02-14 19:26:08.175316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:30.811 [2024-02-14 19:26:08.175327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:30.811 [2024-02-14 19:26:08.175336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.812 [2024-02-14 19:26:08.175363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:30.812 [2024-02-14 19:26:08.179116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.812 [2024-02-14 19:26:08.179164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.812 [2024-02-14 19:26:08.179194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.759 ms 00:25:30.812 [2024-02-14 19:26:08.179203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.812 [2024-02-14 19:26:08.179241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.812 [2024-02-14 19:26:08.179254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:30.812 [2024-02-14 19:26:08.179265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:30.812 [2024-02-14 19:26:08.179277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.812 [2024-02-14 19:26:08.179316] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:30.812 [2024-02-14 19:26:08.179343] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:30.812 [2024-02-14 19:26:08.179376] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:30.812 [2024-02-14 19:26:08.179406] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:30.812 [2024-02-14 19:26:08.179539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:30.812 [2024-02-14 19:26:08.179563] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:30.812 [2024-02-14 19:26:08.179582] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x140 bytes 00:25:30.812 [2024-02-14 19:26:08.179596] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:30.812 [2024-02-14 19:26:08.179607] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:30.812 [2024-02-14 19:26:08.179618] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:30.812 [2024-02-14 19:26:08.179627] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:30.812 [2024-02-14 19:26:08.179636] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:30.812 [2024-02-14 19:26:08.179646] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:30.812 [2024-02-14 19:26:08.179657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.812 [2024-02-14 19:26:08.179667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:30.812 [2024-02-14 19:26:08.179678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:25:30.812 [2024-02-14 19:26:08.179687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.812 [2024-02-14 19:26:08.179768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.812 [2024-02-14 19:26:08.179781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:30.812 [2024-02-14 19:26:08.179791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:30.812 [2024-02-14 19:26:08.179800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.812 [2024-02-14 19:26:08.179894] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:30.812 [2024-02-14 19:26:08.179924] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:30.812 [2024-02-14 19:26:08.179935] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.812 [2024-02-14 19:26:08.179945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.812 [2024-02-14 19:26:08.179954] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:30.812 [2024-02-14 19:26:08.179964] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:30.812 [2024-02-14 19:26:08.179973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:30.812 [2024-02-14 19:26:08.179981] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:30.812 [2024-02-14 19:26:08.179990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:30.812 [2024-02-14 19:26:08.179998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.812 [2024-02-14 19:26:08.180007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:30.812 [2024-02-14 19:26:08.180015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:30.812 [2024-02-14 19:26:08.180024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.812 [2024-02-14 19:26:08.180033] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:30.812 [2024-02-14 19:26:08.180042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:30.812 [2024-02-14 19:26:08.180050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.812 [2024-02-14 19:26:08.180058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:25:30.812 [2024-02-14 19:26:08.180067] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:30.812 [2024-02-14 19:26:08.180075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.812 [2024-02-14 19:26:08.180084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:30.812 [2024-02-14 19:26:08.180105] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:30.812 [2024-02-14 19:26:08.180114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:30.812 [2024-02-14 19:26:08.180123] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:30.812 [2024-02-14 19:26:08.180131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:30.812 [2024-02-14 19:26:08.180140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:30.812 [2024-02-14 19:26:08.180148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:30.812 [2024-02-14 19:26:08.180156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:30.812 [2024-02-14 19:26:08.180165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:30.812 [2024-02-14 19:26:08.180174] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:30.812 [2024-02-14 19:26:08.180182] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:30.812 [2024-02-14 19:26:08.180191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:30.812 [2024-02-14 19:26:08.180199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:30.812 [2024-02-14 19:26:08.180208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:30.812 [2024-02-14 19:26:08.180216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:30.812 [2024-02-14 19:26:08.180224] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:30.812 [2024-02-14 19:26:08.180233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:30.812 [2024-02-14 19:26:08.180241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.812 [2024-02-14 19:26:08.180249] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:30.812 [2024-02-14 19:26:08.180258] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:30.812 [2024-02-14 19:26:08.180266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.812 [2024-02-14 19:26:08.180274] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:30.812 [2024-02-14 19:26:08.180287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:30.812 [2024-02-14 19:26:08.180296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.812 [2024-02-14 19:26:08.180305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.812 [2024-02-14 19:26:08.180315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:30.812 [2024-02-14 19:26:08.180326] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:30.812 [2024-02-14 19:26:08.180335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:30.812 [2024-02-14 19:26:08.180344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:30.812 [2024-02-14 19:26:08.180352] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:30.812 [2024-02-14 19:26:08.180361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:30.812 [2024-02-14 19:26:08.180370] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:30.812 [2024-02-14 19:26:08.180381] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.812 [2024-02-14 19:26:08.180392] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:30.812 [2024-02-14 19:26:08.180402] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:30.812 [2024-02-14 19:26:08.180412] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:30.812 [2024-02-14 19:26:08.180421] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:30.813 [2024-02-14 19:26:08.180430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:30.813 [2024-02-14 19:26:08.180439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:30.813 [2024-02-14 19:26:08.180449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:30.813 [2024-02-14 19:26:08.180458] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:30.813 [2024-02-14 19:26:08.180467] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:30.813 [2024-02-14 19:26:08.180476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:30.813 [2024-02-14 19:26:08.180485] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:30.813 [2024-02-14 19:26:08.180495] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:30.813 [2024-02-14 19:26:08.180504] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:30.813 [2024-02-14 19:26:08.180513] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:30.813 [2024-02-14 19:26:08.180524] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.813 [2024-02-14 19:26:08.180534] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:30.813 [2024-02-14 19:26:08.180564] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:30.813 [2024-02-14 19:26:08.180577] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:30.813 [2024-02-14 19:26:08.180587] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:30.813 [2024-02-14 19:26:08.180598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.813 [2024-02-14 19:26:08.180608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:30.813 [2024-02-14 19:26:08.180618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:25:30.813 [2024-02-14 19:26:08.180632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.813 [2024-02-14 19:26:08.196244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.813 [2024-02-14 19:26:08.196287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.813 [2024-02-14 19:26:08.196319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.547 ms 00:25:30.813 [2024-02-14 19:26:08.196333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.813 [2024-02-14 19:26:08.196416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.813 [2024-02-14 19:26:08.196430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:30.813 [2024-02-14 19:26:08.196440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:30.813 [2024-02-14 19:26:08.196449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.072 [2024-02-14 19:26:08.237843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.072 [2024-02-14 19:26:08.237934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.072 [2024-02-14 19:26:08.237968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.298 ms 00:25:31.072 [2024-02-14 19:26:08.237980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.072 [2024-02-14 19:26:08.238037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.238053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.073 [2024-02-14 19:26:08.238065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:31.073 [2024-02-14 19:26:08.238076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.238490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.238522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.073 [2024-02-14 19:26:08.238538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:25:31.073 [2024-02-14 19:26:08.238548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.238708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.238727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.073 [2024-02-14 19:26:08.238738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:25:31.073 [2024-02-14 19:26:08.238747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.252983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 
19:26:08.253019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.073 [2024-02-14 19:26:08.253049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.211 ms 00:25:31.073 [2024-02-14 19:26:08.253058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.266388] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:31.073 [2024-02-14 19:26:08.266424] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:31.073 [2024-02-14 19:26:08.266455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.266465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:31.073 [2024-02-14 19:26:08.266475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.289 ms 00:25:31.073 [2024-02-14 19:26:08.266484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.290740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.290792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:31.073 [2024-02-14 19:26:08.290825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.186 ms 00:25:31.073 [2024-02-14 19:26:08.290834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.304045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.304082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:31.073 [2024-02-14 19:26:08.304112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.145 ms 00:25:31.073 [2024-02-14 19:26:08.304121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.317425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.317461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:31.073 [2024-02-14 19:26:08.317507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.264 ms 00:25:31.073 [2024-02-14 19:26:08.317529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.318024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.318062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:31.073 [2024-02-14 19:26:08.318077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:25:31.073 [2024-02-14 19:26:08.318088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.376759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.376818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:31.073 [2024-02-14 19:26:08.376836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.648 ms 00:25:31.073 [2024-02-14 19:26:08.376846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.386682] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:31.073 [2024-02-14 19:26:08.388618] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.388647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:31.073 [2024-02-14 19:26:08.388681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.722 ms 00:25:31.073 [2024-02-14 19:26:08.388690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.388769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.388786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:31.073 [2024-02-14 19:26:08.388797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:31.073 [2024-02-14 19:26:08.388806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.389372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.389394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:31.073 [2024-02-14 19:26:08.389405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:25:31.073 [2024-02-14 19:26:08.389419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.391147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.391180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:31.073 [2024-02-14 19:26:08.391208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.706 ms 00:25:31.073 [2024-02-14 19:26:08.391217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.391257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.391271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:31.073 [2024-02-14 19:26:08.391280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:31.073 [2024-02-14 19:26:08.391301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.391354] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:31.073 [2024-02-14 19:26:08.391370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.391379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:31.073 [2024-02-14 19:26:08.391388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:31.073 [2024-02-14 19:26:08.391398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.416408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.416445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:31.073 [2024-02-14 19:26:08.416461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.986 ms 00:25:31.073 [2024-02-14 19:26:08.416513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.416602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.073 [2024-02-14 19:26:08.416634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:31.073 [2024-02-14 19:26:08.416646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:25:31.073 [2024-02-14 19:26:08.416656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.073 [2024-02-14 19:26:08.418089] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 262.720 ms, result 0 00:26:16.729  Copying: 24/1024 [MB] (24 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 70/1024 [MB] (22 MBps) Copying: 93/1024 [MB] (22 MBps) Copying: 116/1024 [MB] (22 MBps) Copying: 138/1024 [MB] (22 MBps) Copying: 161/1024 [MB] (22 MBps) Copying: 183/1024 [MB] (22 MBps) Copying: 206/1024 [MB] (22 MBps) Copying: 229/1024 [MB] (22 MBps) Copying: 251/1024 [MB] (22 MBps) Copying: 273/1024 [MB] (22 MBps) Copying: 295/1024 [MB] (22 MBps) Copying: 318/1024 [MB] (22 MBps) Copying: 341/1024 [MB] (22 MBps) Copying: 363/1024 [MB] (22 MBps) Copying: 387/1024 [MB] (23 MBps) Copying: 409/1024 [MB] (22 MBps) Copying: 432/1024 [MB] (22 MBps) Copying: 454/1024 [MB] (22 MBps) Copying: 477/1024 [MB] (22 MBps) Copying: 500/1024 [MB] (22 MBps) Copying: 522/1024 [MB] (22 MBps) Copying: 544/1024 [MB] (22 MBps) Copying: 567/1024 [MB] (22 MBps) Copying: 589/1024 [MB] (22 MBps) Copying: 611/1024 [MB] (22 MBps) Copying: 634/1024 [MB] (23 MBps) Copying: 657/1024 [MB] (23 MBps) Copying: 680/1024 [MB] (22 MBps) Copying: 702/1024 [MB] (22 MBps) Copying: 725/1024 [MB] (22 MBps) Copying: 748/1024 [MB] (22 MBps) Copying: 770/1024 [MB] (22 MBps) Copying: 792/1024 [MB] (22 MBps) Copying: 814/1024 [MB] (22 MBps) Copying: 837/1024 [MB] (22 MBps) Copying: 859/1024 [MB] (22 MBps) Copying: 882/1024 [MB] (22 MBps) Copying: 904/1024 [MB] (22 MBps) Copying: 926/1024 [MB] (22 MBps) Copying: 948/1024 [MB] (22 MBps) Copying: 970/1024 [MB] (22 MBps) Copying: 994/1024 [MB] (23 MBps) Copying: 1017/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-02-14 19:26:54.119555] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:16.729 [2024-02-14 19:26:54.119887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.729 [2024-02-14 19:26:54.119917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:16.729 [2024-02-14 19:26:54.119942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:16.729 [2024-02-14 19:26:54.119958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.729 [2024-02-14 19:26:54.120012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:16.729 [2024-02-14 19:26:54.125108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.729 [2024-02-14 19:26:54.125147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:16.729 [2024-02-14 19:26:54.125170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.066 ms 00:26:16.729 [2024-02-14 19:26:54.125188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.729 [2024-02-14 19:26:54.125475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.729 [2024-02-14 19:26:54.125579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:16.729 [2024-02-14 19:26:54.125602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:26:16.729 [2024-02-14 19:26:54.125622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:16.729 [2024-02-14 19:26:54.128541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.729 [2024-02-14 19:26:54.128575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:16.729 [2024-02-14 19:26:54.128597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.870 ms 00:26:16.729 [2024-02-14 19:26:54.128614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.729 [2024-02-14 19:26:54.133916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.729 [2024-02-14 19:26:54.133951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:16.729 [2024-02-14 19:26:54.133973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.269 ms 00:26:16.729 [2024-02-14 19:26:54.133990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.989 [2024-02-14 19:26:54.159958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.989 [2024-02-14 19:26:54.159995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:16.989 [2024-02-14 19:26:54.160018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.873 ms 00:26:16.989 [2024-02-14 19:26:54.160035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.989 [2024-02-14 19:26:54.174789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.989 [2024-02-14 19:26:54.174831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:16.989 [2024-02-14 19:26:54.174854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.702 ms 00:26:16.989 [2024-02-14 19:26:54.174871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.989 [2024-02-14 19:26:54.178695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.989 [2024-02-14 19:26:54.178738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:16.989 [2024-02-14 19:26:54.178762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.788 ms 00:26:16.989 [2024-02-14 19:26:54.178779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.989 [2024-02-14 19:26:54.203507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.989 [2024-02-14 19:26:54.203543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:16.989 [2024-02-14 19:26:54.203566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.682 ms 00:26:16.989 [2024-02-14 19:26:54.203582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.989 [2024-02-14 19:26:54.227766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.989 [2024-02-14 19:26:54.227803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:16.989 [2024-02-14 19:26:54.227826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.133 ms 00:26:16.989 [2024-02-14 19:26:54.227842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.989 [2024-02-14 19:26:54.252084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.989 [2024-02-14 19:26:54.252131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:16.989 [2024-02-14 19:26:54.252155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.189 ms 00:26:16.989 
[2024-02-14 19:26:54.252171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.989 [2024-02-14 19:26:54.276102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.989 [2024-02-14 19:26:54.276140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:16.989 [2024-02-14 19:26:54.276162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.827 ms 00:26:16.989 [2024-02-14 19:26:54.276178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.989 [2024-02-14 19:26:54.276228] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:16.989 [2024-02-14 19:26:54.276264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:16.989 [2024-02-14 19:26:54.276286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:26:16.989 [2024-02-14 19:26:54.276302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:16.989 [2024-02-14 19:26:54.276469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 
19:26:54.276620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.276983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:26:16.990 [2024-02-14 19:26:54.277068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.277995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.278011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.278029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:16.990 [2024-02-14 19:26:54.278047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:16.991 [2024-02-14 19:26:54.278064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:16.991 [2024-02-14 19:26:54.278092] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:16.991 [2024-02-14 19:26:54.278110] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d995f5ad-0556-475a-9889-bdde60ce73db 00:26:16.991 [2024-02-14 19:26:54.278129] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:16.991 [2024-02-14 19:26:54.278146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:16.991 [2024-02-14 19:26:54.278162] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:16.991 [2024-02-14 19:26:54.278190] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:16.991 [2024-02-14 19:26:54.278221] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:16.991 [2024-02-14 19:26:54.278238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:16.991 [2024-02-14 19:26:54.278262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:16.991 [2024-02-14 19:26:54.278278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:16.991 [2024-02-14 19:26:54.278294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:16.991 [2024-02-14 19:26:54.278311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.991 [2024-02-14 19:26:54.278354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:16.991 [2024-02-14 19:26:54.278373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:26:16.991 [2024-02-14 19:26:54.278389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.991 [2024-02-14 19:26:54.292803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.991 [2024-02-14 19:26:54.292839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:16.991 [2024-02-14 19:26:54.292863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.361 ms 00:26:16.991 [2024-02-14 19:26:54.292888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.991 [2024-02-14 19:26:54.293173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.991 [2024-02-14 19:26:54.293225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:16.991 [2024-02-14 19:26:54.293248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:26:16.991 [2024-02-14 19:26:54.293266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.991 [2024-02-14 19:26:54.328446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.991 [2024-02-14 19:26:54.328505] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:16.991 [2024-02-14 19:26:54.328553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.991 [2024-02-14 19:26:54.328570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.991 [2024-02-14 19:26:54.328661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.991 [2024-02-14 19:26:54.328683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:16.991 [2024-02-14 19:26:54.328701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.991 [2024-02-14 19:26:54.328718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.991 [2024-02-14 19:26:54.328827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.991 [2024-02-14 19:26:54.328887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:16.991 [2024-02-14 19:26:54.328905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.991 [2024-02-14 19:26:54.328945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.991 [2024-02-14 19:26:54.328978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.991 [2024-02-14 19:26:54.329012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:16.991 [2024-02-14 19:26:54.329030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.991 [2024-02-14 19:26:54.329046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.250 [2024-02-14 19:26:54.409757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.250 [2024-02-14 19:26:54.409806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.250 [2024-02-14 19:26:54.409846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.250 [2024-02-14 19:26:54.409861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.250 [2024-02-14 19:26:54.441702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.250 [2024-02-14 19:26:54.441740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.250 [2024-02-14 19:26:54.441762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.250 [2024-02-14 19:26:54.441779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.250 [2024-02-14 19:26:54.441901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.250 [2024-02-14 19:26:54.441943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.250 [2024-02-14 19:26:54.441961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.250 [2024-02-14 19:26:54.441978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.250 [2024-02-14 19:26:54.442063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.250 [2024-02-14 19:26:54.442089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.250 [2024-02-14 19:26:54.442106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.250 [2024-02-14 19:26:54.442121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.250 [2024-02-14 19:26:54.442291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:26:17.250 [2024-02-14 19:26:54.442315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.250 [2024-02-14 19:26:54.442332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.250 [2024-02-14 19:26:54.442347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.250 [2024-02-14 19:26:54.442414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.250 [2024-02-14 19:26:54.442436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:17.250 [2024-02-14 19:26:54.442453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.250 [2024-02-14 19:26:54.442468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.250 [2024-02-14 19:26:54.442523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.250 [2024-02-14 19:26:54.442587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.250 [2024-02-14 19:26:54.442626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.250 [2024-02-14 19:26:54.442643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.250 [2024-02-14 19:26:54.442744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.250 [2024-02-14 19:26:54.442771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.250 [2024-02-14 19:26:54.442790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.250 [2024-02-14 19:26:54.442804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.250 [2024-02-14 19:26:54.443059] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 323.085 ms, result 0 00:26:18.187 00:26:18.187 00:26:18.187 19:26:55 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:20.090 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:20.090 19:26:57 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:20.090 19:26:57 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:20.090 19:26:57 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:20.091 19:26:57 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:20.091 19:26:57 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:20.091 19:26:57 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:20.091 19:26:57 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:20.091 Process with pid 76507 is not found 00:26:20.091 19:26:57 -- ftl/dirty_shutdown.sh@37 -- # killprocess 76507 00:26:20.091 19:26:57 -- common/autotest_common.sh@924 -- # '[' -z 76507 ']' 00:26:20.091 19:26:57 -- common/autotest_common.sh@928 -- # kill -0 76507 00:26:20.091 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (76507) - No such process 00:26:20.091 19:26:57 -- common/autotest_common.sh@951 -- # echo 'Process with pid 76507 is not found' 00:26:20.091 19:26:57 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:20.350 19:26:57 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:20.350 19:26:57 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:20.350 Remove shared memory 
files 00:26:20.350 19:26:57 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:20.350 19:26:57 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:20.350 19:26:57 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:20.350 19:26:57 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:20.350 19:26:57 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:20.350 ************************************ 00:26:20.350 END TEST ftl_dirty_shutdown 00:26:20.350 ************************************ 00:26:20.350 00:26:20.350 real 3m55.336s 00:26:20.350 user 4m31.791s 00:26:20.350 sys 0m34.552s 00:26:20.350 19:26:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:20.350 19:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:20.350 19:26:57 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:20.350 19:26:57 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:26:20.350 19:26:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:20.350 19:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:20.350 ************************************ 00:26:20.350 START TEST ftl_upgrade_shutdown 00:26:20.350 ************************************ 00:26:20.350 19:26:57 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:20.350 * Looking for test storage... 00:26:20.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:20.350 19:26:57 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:20.610 19:26:57 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:20.610 19:26:57 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:20.610 19:26:57 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:20.610 19:26:57 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
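For reference, the pass/fail decision of the dirty-shutdown test that just ended reduces to the md5 comparison recorded above: a checksum of the test data is taken before the unclean shutdown, the data is read back once FTL has recovered (the Copying progress earlier in the log), and md5sum -c delivers the verdict before the scratch files and the generated ftl.json are removed. The following is a minimal bash sketch of that verify-then-clean step, not the harness code itself; the function name is invented for illustration, the paths are the ones shown in the log, and error handling is omitted.

    # Hypothetical helper mirroring the verify/cleanup sequence seen above.
    ftl_testdir=/home/vagrant/spdk_repo/spdk/test/ftl

    verify_dirty_shutdown() {
        # testfile2 is assumed to already hold the data read back from the
        # recovered FTL bdev; testfile2.md5 was recorded before the dirty shutdown.
        md5sum -c "$ftl_testdir/testfile2.md5" || return 1

        # Teardown: drop the generated bdev config and the scratch files.
        rm -f "$ftl_testdir/config/ftl.json" \
              "$ftl_testdir/testfile" "$ftl_testdir/testfile2" \
              "$ftl_testdir/testfile.md5" "$ftl_testdir/testfile2.md5"
    }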
00:26:20.610 19:26:57 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:20.610 19:26:57 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:20.610 19:26:57 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:20.610 19:26:57 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:20.610 19:26:57 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.610 19:26:57 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.610 19:26:57 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:20.610 19:26:57 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:20.610 19:26:57 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:20.610 19:26:57 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:20.610 19:26:57 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:20.610 19:26:57 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:20.610 19:26:57 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.610 19:26:57 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.610 19:26:57 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:20.610 19:26:57 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:20.610 19:26:57 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:20.610 19:26:57 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:20.610 19:26:57 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:20.610 19:26:57 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:20.610 19:26:57 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:20.610 19:26:57 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:20.610 19:26:57 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:20.610 19:26:57 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:20.610 19:26:57 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:20.610 19:26:57 -- ftl/common.sh@81 -- # local base_bdev= 00:26:20.610 19:26:57 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:20.610 19:26:57 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:20.610 19:26:57 -- ftl/common.sh@89 -- # spdk_tgt_pid=78978 00:26:20.610 19:26:57 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:20.610 19:26:57 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:20.610 19:26:57 -- ftl/common.sh@91 -- # waitforlisten 78978 00:26:20.610 19:26:57 -- common/autotest_common.sh@817 -- # '[' -z 78978 ']' 00:26:20.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.610 19:26:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.610 19:26:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:20.610 19:26:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.610 19:26:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:20.610 19:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:20.610 [2024-02-14 19:26:57.906902] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:26:20.610 [2024-02-14 19:26:57.907069] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78978 ] 00:26:20.869 [2024-02-14 19:26:58.080806] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.128 [2024-02-14 19:26:58.306600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:21.128 [2024-02-14 19:26:58.306875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.066 19:26:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:22.066 19:26:59 -- common/autotest_common.sh@850 -- # return 0 00:26:22.066 19:26:59 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:22.066 19:26:59 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:22.066 19:26:59 -- ftl/common.sh@99 -- # local params 00:26:22.066 19:26:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:22.066 19:26:59 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:22.066 19:26:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:22.066 19:26:59 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:22.066 19:26:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:22.066 19:26:59 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:22.066 19:26:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:22.066 19:26:59 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:22.066 19:26:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:22.066 19:26:59 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:22.066 19:26:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:22.066 19:26:59 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:22.066 19:26:59 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:22.066 19:26:59 -- ftl/common.sh@54 -- # local name=base 00:26:22.066 19:26:59 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:22.066 19:26:59 -- ftl/common.sh@56 -- # local size=20480 00:26:22.066 19:26:59 -- ftl/common.sh@59 -- # local base_bdev 00:26:22.066 19:26:59 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t 
PCIe -a 0000:00:07.0 00:26:22.634 19:26:59 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:22.634 19:26:59 -- ftl/common.sh@62 -- # local base_size 00:26:22.634 19:26:59 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:22.634 19:26:59 -- common/autotest_common.sh@1355 -- # local bdev_name=basen1 00:26:22.634 19:26:59 -- common/autotest_common.sh@1356 -- # local bdev_info 00:26:22.634 19:26:59 -- common/autotest_common.sh@1357 -- # local bs 00:26:22.634 19:26:59 -- common/autotest_common.sh@1358 -- # local nb 00:26:22.634 19:26:59 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:22.893 19:27:00 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:26:22.893 { 00:26:22.893 "name": "basen1", 00:26:22.893 "aliases": [ 00:26:22.893 "3b8c56ce-5c91-4889-b4c9-221982213e95" 00:26:22.893 ], 00:26:22.893 "product_name": "NVMe disk", 00:26:22.893 "block_size": 4096, 00:26:22.893 "num_blocks": 1310720, 00:26:22.893 "uuid": "3b8c56ce-5c91-4889-b4c9-221982213e95", 00:26:22.893 "assigned_rate_limits": { 00:26:22.893 "rw_ios_per_sec": 0, 00:26:22.893 "rw_mbytes_per_sec": 0, 00:26:22.893 "r_mbytes_per_sec": 0, 00:26:22.893 "w_mbytes_per_sec": 0 00:26:22.893 }, 00:26:22.893 "claimed": true, 00:26:22.893 "claim_type": "read_many_write_one", 00:26:22.893 "zoned": false, 00:26:22.893 "supported_io_types": { 00:26:22.893 "read": true, 00:26:22.893 "write": true, 00:26:22.893 "unmap": true, 00:26:22.893 "write_zeroes": true, 00:26:22.893 "flush": true, 00:26:22.893 "reset": true, 00:26:22.893 "compare": true, 00:26:22.893 "compare_and_write": false, 00:26:22.893 "abort": true, 00:26:22.893 "nvme_admin": true, 00:26:22.893 "nvme_io": true 00:26:22.893 }, 00:26:22.893 "driver_specific": { 00:26:22.893 "nvme": [ 00:26:22.893 { 00:26:22.893 "pci_address": "0000:00:07.0", 00:26:22.893 "trid": { 00:26:22.893 "trtype": "PCIe", 00:26:22.893 "traddr": "0000:00:07.0" 00:26:22.893 }, 00:26:22.893 "ctrlr_data": { 00:26:22.893 "cntlid": 0, 00:26:22.893 "vendor_id": "0x1b36", 00:26:22.893 "model_number": "QEMU NVMe Ctrl", 00:26:22.893 "serial_number": "12341", 00:26:22.893 "firmware_revision": "8.0.0", 00:26:22.893 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:22.893 "oacs": { 00:26:22.893 "security": 0, 00:26:22.893 "format": 1, 00:26:22.893 "firmware": 0, 00:26:22.893 "ns_manage": 1 00:26:22.893 }, 00:26:22.893 "multi_ctrlr": false, 00:26:22.893 "ana_reporting": false 00:26:22.893 }, 00:26:22.893 "vs": { 00:26:22.893 "nvme_version": "1.4" 00:26:22.893 }, 00:26:22.893 "ns_data": { 00:26:22.893 "id": 1, 00:26:22.893 "can_share": false 00:26:22.893 } 00:26:22.893 } 00:26:22.893 ], 00:26:22.893 "mp_policy": "active_passive" 00:26:22.893 } 00:26:22.893 } 00:26:22.893 ]' 00:26:22.893 19:27:00 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:26:22.893 19:27:00 -- common/autotest_common.sh@1360 -- # bs=4096 00:26:22.893 19:27:00 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:26:22.893 19:27:00 -- common/autotest_common.sh@1361 -- # nb=1310720 00:26:22.893 19:27:00 -- common/autotest_common.sh@1364 -- # bdev_size=5120 00:26:22.893 19:27:00 -- common/autotest_common.sh@1365 -- # echo 5120 00:26:22.893 19:27:00 -- ftl/common.sh@63 -- # base_size=5120 00:26:22.893 19:27:00 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:22.893 19:27:00 -- ftl/common.sh@67 -- # clear_lvols 00:26:22.893 19:27:00 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:22.893 19:27:00 -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:26:23.153 19:27:00 -- ftl/common.sh@28 -- # stores=612c6e47-eb96-4989-9a65-7fabe7788283 00:26:23.153 19:27:00 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:23.153 19:27:00 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 612c6e47-eb96-4989-9a65-7fabe7788283 00:26:23.411 19:27:00 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:23.671 19:27:00 -- ftl/common.sh@68 -- # lvs=e278df3b-4076-40c7-8f78-972aceccf7a9 00:26:23.671 19:27:00 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u e278df3b-4076-40c7-8f78-972aceccf7a9 00:26:23.671 19:27:01 -- ftl/common.sh@107 -- # base_bdev=54ae10bb-e5b9-4311-8277-ca832b99632c 00:26:23.671 19:27:01 -- ftl/common.sh@108 -- # [[ -z 54ae10bb-e5b9-4311-8277-ca832b99632c ]] 00:26:23.671 19:27:01 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 54ae10bb-e5b9-4311-8277-ca832b99632c 5120 00:26:23.671 19:27:01 -- ftl/common.sh@35 -- # local name=cache 00:26:23.671 19:27:01 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:23.671 19:27:01 -- ftl/common.sh@37 -- # local base_bdev=54ae10bb-e5b9-4311-8277-ca832b99632c 00:26:23.671 19:27:01 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:23.671 19:27:01 -- ftl/common.sh@41 -- # get_bdev_size 54ae10bb-e5b9-4311-8277-ca832b99632c 00:26:23.671 19:27:01 -- common/autotest_common.sh@1355 -- # local bdev_name=54ae10bb-e5b9-4311-8277-ca832b99632c 00:26:23.671 19:27:01 -- common/autotest_common.sh@1356 -- # local bdev_info 00:26:23.671 19:27:01 -- common/autotest_common.sh@1357 -- # local bs 00:26:23.671 19:27:01 -- common/autotest_common.sh@1358 -- # local nb 00:26:23.671 19:27:01 -- common/autotest_common.sh@1359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54ae10bb-e5b9-4311-8277-ca832b99632c 00:26:23.930 19:27:01 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:26:23.930 { 00:26:23.930 "name": "54ae10bb-e5b9-4311-8277-ca832b99632c", 00:26:23.930 "aliases": [ 00:26:23.930 "lvs/basen1p0" 00:26:23.930 ], 00:26:23.930 "product_name": "Logical Volume", 00:26:23.930 "block_size": 4096, 00:26:23.930 "num_blocks": 5242880, 00:26:23.930 "uuid": "54ae10bb-e5b9-4311-8277-ca832b99632c", 00:26:23.930 "assigned_rate_limits": { 00:26:23.930 "rw_ios_per_sec": 0, 00:26:23.930 "rw_mbytes_per_sec": 0, 00:26:23.930 "r_mbytes_per_sec": 0, 00:26:23.930 "w_mbytes_per_sec": 0 00:26:23.930 }, 00:26:23.930 "claimed": false, 00:26:23.930 "zoned": false, 00:26:23.930 "supported_io_types": { 00:26:23.930 "read": true, 00:26:23.930 "write": true, 00:26:23.930 "unmap": true, 00:26:23.930 "write_zeroes": true, 00:26:23.930 "flush": false, 00:26:23.930 "reset": true, 00:26:23.930 "compare": false, 00:26:23.930 "compare_and_write": false, 00:26:23.930 "abort": false, 00:26:23.930 "nvme_admin": false, 00:26:23.930 "nvme_io": false 00:26:23.930 }, 00:26:23.930 "driver_specific": { 00:26:23.930 "lvol": { 00:26:23.930 "lvol_store_uuid": "e278df3b-4076-40c7-8f78-972aceccf7a9", 00:26:23.930 "base_bdev": "basen1", 00:26:23.930 "thin_provision": true, 00:26:23.930 "snapshot": false, 00:26:23.930 "clone": false, 00:26:23.930 "esnap_clone": false 00:26:23.930 } 00:26:23.930 } 00:26:23.930 } 00:26:23.930 ]' 00:26:23.930 19:27:01 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:26:24.189 19:27:01 -- common/autotest_common.sh@1360 -- # bs=4096 00:26:24.189 19:27:01 -- common/autotest_common.sh@1361 -- # jq '.[] 
.num_blocks' 00:26:24.189 19:27:01 -- common/autotest_common.sh@1361 -- # nb=5242880 00:26:24.189 19:27:01 -- common/autotest_common.sh@1364 -- # bdev_size=20480 00:26:24.189 19:27:01 -- common/autotest_common.sh@1365 -- # echo 20480 00:26:24.189 19:27:01 -- ftl/common.sh@41 -- # local base_size=1024 00:26:24.189 19:27:01 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:24.189 19:27:01 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:24.448 19:27:01 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:24.448 19:27:01 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:24.448 19:27:01 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:24.709 19:27:01 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:24.709 19:27:01 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:24.709 19:27:01 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 54ae10bb-e5b9-4311-8277-ca832b99632c -c cachen1p0 --l2p_dram_limit 2 00:26:24.709 [2024-02-14 19:27:02.097180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.097231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:24.709 [2024-02-14 19:27:02.097268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:24.709 [2024-02-14 19:27:02.097278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.097366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.097384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:24.709 [2024-02-14 19:27:02.097397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:26:24.709 [2024-02-14 19:27:02.097407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.097436] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:24.709 [2024-02-14 19:27:02.098503] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:24.709 [2024-02-14 19:27:02.098552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.098564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:24.709 [2024-02-14 19:27:02.098577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.121 ms 00:26:24.709 [2024-02-14 19:27:02.098587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.098712] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 764770a3-dba9-4cc1-86e9-4022bdfc4b9a 00:26:24.709 [2024-02-14 19:27:02.099658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.099701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:24.709 [2024-02-14 19:27:02.099717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:24.709 [2024-02-14 19:27:02.099744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.104037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.104092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] 
name: Initialize memory pools 00:26:24.709 [2024-02-14 19:27:02.104107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.244 ms 00:26:24.709 [2024-02-14 19:27:02.104118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.104170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.104188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:24.709 [2024-02-14 19:27:02.104198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:24.709 [2024-02-14 19:27:02.104211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.104277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.104297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:24.709 [2024-02-14 19:27:02.104310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:24.709 [2024-02-14 19:27:02.104321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.104348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:24.709 [2024-02-14 19:27:02.108385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.108419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:24.709 [2024-02-14 19:27:02.108452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.040 ms 00:26:24.709 [2024-02-14 19:27:02.108462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.108497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.108525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:24.709 [2024-02-14 19:27:02.108539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:24.709 [2024-02-14 19:27:02.108548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.108596] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:24.709 [2024-02-14 19:27:02.108728] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:24.709 [2024-02-14 19:27:02.108748] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:24.709 [2024-02-14 19:27:02.108761] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:24.709 [2024-02-14 19:27:02.108775] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:24.709 [2024-02-14 19:27:02.108785] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:24.709 [2024-02-14 19:27:02.108797] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:24.709 [2024-02-14 19:27:02.108808] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:24.709 [2024-02-14 19:27:02.108818] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:24.709 [2024-02-14 19:27:02.108827] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:24.709 [2024-02-14 
19:27:02.108840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.108850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:24.709 [2024-02-14 19:27:02.108861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.245 ms 00:26:24.709 [2024-02-14 19:27:02.108870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.108944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.709 [2024-02-14 19:27:02.108957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:24.709 [2024-02-14 19:27:02.108969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:24.709 [2024-02-14 19:27:02.108979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.709 [2024-02-14 19:27:02.109053] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:24.709 [2024-02-14 19:27:02.109067] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:24.709 [2024-02-14 19:27:02.109080] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:24.709 [2024-02-14 19:27:02.109089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.709 [2024-02-14 19:27:02.109100] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:24.709 [2024-02-14 19:27:02.109109] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:24.709 [2024-02-14 19:27:02.109120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:24.710 [2024-02-14 19:27:02.109129] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:24.710 [2024-02-14 19:27:02.109139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:24.710 [2024-02-14 19:27:02.109147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.710 [2024-02-14 19:27:02.109157] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:24.710 [2024-02-14 19:27:02.109167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:24.710 [2024-02-14 19:27:02.109177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.710 [2024-02-14 19:27:02.109186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:24.710 [2024-02-14 19:27:02.109196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:24.710 [2024-02-14 19:27:02.109204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.710 [2024-02-14 19:27:02.109218] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:24.710 [2024-02-14 19:27:02.109226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:24.710 [2024-02-14 19:27:02.109236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.710 [2024-02-14 19:27:02.109245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:24.710 [2024-02-14 19:27:02.109257] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:24.710 [2024-02-14 19:27:02.109265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:24.710 [2024-02-14 19:27:02.109276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:24.710 [2024-02-14 19:27:02.109285] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:24.710 [2024-02-14 
19:27:02.109294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:24.710 [2024-02-14 19:27:02.109303] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:24.710 [2024-02-14 19:27:02.109313] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:24.710 [2024-02-14 19:27:02.109321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:24.710 [2024-02-14 19:27:02.109331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:24.710 [2024-02-14 19:27:02.109340] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:24.710 [2024-02-14 19:27:02.109350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:24.710 [2024-02-14 19:27:02.109358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:24.710 [2024-02-14 19:27:02.109370] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:24.710 [2024-02-14 19:27:02.109379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:24.710 [2024-02-14 19:27:02.109388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:24.710 [2024-02-14 19:27:02.109397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:24.710 [2024-02-14 19:27:02.109406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.710 [2024-02-14 19:27:02.109415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:24.710 [2024-02-14 19:27:02.109425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:24.710 [2024-02-14 19:27:02.109433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.710 [2024-02-14 19:27:02.109443] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:24.710 [2024-02-14 19:27:02.109452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:24.710 [2024-02-14 19:27:02.109465] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:24.710 [2024-02-14 19:27:02.109474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.710 [2024-02-14 19:27:02.109498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:24.710 [2024-02-14 19:27:02.109510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:24.710 [2024-02-14 19:27:02.109520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:24.710 [2024-02-14 19:27:02.109530] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:24.710 [2024-02-14 19:27:02.109541] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:24.710 [2024-02-14 19:27:02.109550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:24.710 [2024-02-14 19:27:02.109564] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:24.710 [2024-02-14 19:27:02.109578] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.710 [2024-02-14 19:27:02.109590] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:24.710 [2024-02-14 19:27:02.109600] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:24.710 [2024-02-14 
19:27:02.109611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:24.710 [2024-02-14 19:27:02.109620] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:24.710 [2024-02-14 19:27:02.109631] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:24.710 [2024-02-14 19:27:02.109640] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:24.710 [2024-02-14 19:27:02.109651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:24.710 [2024-02-14 19:27:02.109660] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:24.710 [2024-02-14 19:27:02.109671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:24.710 [2024-02-14 19:27:02.109680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:24.710 [2024-02-14 19:27:02.109691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:24.710 [2024-02-14 19:27:02.109701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:24.710 [2024-02-14 19:27:02.109716] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:24.710 [2024-02-14 19:27:02.109726] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:24.710 [2024-02-14 19:27:02.109738] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.710 [2024-02-14 19:27:02.109748] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:24.710 [2024-02-14 19:27:02.109759] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:24.710 [2024-02-14 19:27:02.109768] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:24.710 [2024-02-14 19:27:02.109779] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:24.710 [2024-02-14 19:27:02.109790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.710 [2024-02-14 19:27:02.109800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:24.710 [2024-02-14 19:27:02.109810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.776 ms 00:26:24.710 [2024-02-14 19:27:02.109820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.970 [2024-02-14 19:27:02.126879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.970 [2024-02-14 19:27:02.127132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize metadata 00:26:24.970 [2024-02-14 19:27:02.127284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.012 ms 00:26:24.970 [2024-02-14 19:27:02.127361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.970 [2024-02-14 19:27:02.127612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.970 [2024-02-14 19:27:02.127769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:24.970 [2024-02-14 19:27:02.127895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:24.970 [2024-02-14 19:27:02.127965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.970 [2024-02-14 19:27:02.163322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.970 [2024-02-14 19:27:02.163562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:24.970 [2024-02-14 19:27:02.163724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.192 ms 00:26:24.970 [2024-02-14 19:27:02.163780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.970 [2024-02-14 19:27:02.163851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.970 [2024-02-14 19:27:02.163962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:24.970 [2024-02-14 19:27:02.164019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:24.970 [2024-02-14 19:27:02.164055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.970 [2024-02-14 19:27:02.164491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.970 [2024-02-14 19:27:02.164757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:24.970 [2024-02-14 19:27:02.164872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.340 ms 00:26:24.970 [2024-02-14 19:27:02.164954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.970 [2024-02-14 19:27:02.165131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.970 [2024-02-14 19:27:02.165195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:24.970 [2024-02-14 19:27:02.165374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:24.970 [2024-02-14 19:27:02.165430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.970 [2024-02-14 19:27:02.181907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.970 [2024-02-14 19:27:02.182089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:24.970 [2024-02-14 19:27:02.182227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.391 ms 00:26:24.970 [2024-02-14 19:27:02.182311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.970 [2024-02-14 19:27:02.195316] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:24.970 [2024-02-14 19:27:02.196348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.970 [2024-02-14 19:27:02.196561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:24.970 [2024-02-14 19:27:02.196689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.772 ms 00:26:24.970 [2024-02-14 19:27:02.196738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.970 [2024-02-14 
19:27:02.222952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.970 [2024-02-14 19:27:02.223161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:24.971 [2024-02-14 19:27:02.223304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.152 ms 00:26:24.971 [2024-02-14 19:27:02.223352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.971 [2024-02-14 19:27:02.223434] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:26:24.971 [2024-02-14 19:27:02.223657] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:27.505 [2024-02-14 19:27:04.431161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.505 [2024-02-14 19:27:04.431220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:27.505 [2024-02-14 19:27:04.431258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2207.742 ms 00:26:27.505 [2024-02-14 19:27:04.431269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.431372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.505 [2024-02-14 19:27:04.431389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:27.505 [2024-02-14 19:27:04.431407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:26:27.505 [2024-02-14 19:27:04.431416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.457134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.505 [2024-02-14 19:27:04.457171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:27.505 [2024-02-14 19:27:04.457205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.659 ms 00:26:27.505 [2024-02-14 19:27:04.457216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.482561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.505 [2024-02-14 19:27:04.482605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:27.505 [2024-02-14 19:27:04.482642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.298 ms 00:26:27.505 [2024-02-14 19:27:04.482653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.483058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.505 [2024-02-14 19:27:04.483093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:27.505 [2024-02-14 19:27:04.483110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms 00:26:27.505 [2024-02-14 19:27:04.483121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.547697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.505 [2024-02-14 19:27:04.547747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:27.505 [2024-02-14 19:27:04.547784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 64.526 ms 00:26:27.505 [2024-02-14 19:27:04.547795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.574028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:27.505 [2024-02-14 19:27:04.574067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:27.505 [2024-02-14 19:27:04.574105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.180 ms 00:26:27.505 [2024-02-14 19:27:04.574115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.576020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.505 [2024-02-14 19:27:04.576054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:27.505 [2024-02-14 19:27:04.576090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.859 ms 00:26:27.505 [2024-02-14 19:27:04.576109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.603443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.505 [2024-02-14 19:27:04.603480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:27.505 [2024-02-14 19:27:04.603524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.285 ms 00:26:27.505 [2024-02-14 19:27:04.603535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.603586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.505 [2024-02-14 19:27:04.603618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:27.505 [2024-02-14 19:27:04.603632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:27.505 [2024-02-14 19:27:04.603642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.603751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.505 [2024-02-14 19:27:04.603771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:27.505 [2024-02-14 19:27:04.603784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:27.505 [2024-02-14 19:27:04.603793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.505 [2024-02-14 19:27:04.605174] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2507.441 ms, result 0 00:26:27.505 { 00:26:27.505 "name": "ftl", 00:26:27.505 "uuid": "764770a3-dba9-4cc1-86e9-4022bdfc4b9a" 00:26:27.505 } 00:26:27.505 19:27:04 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:27.505 [2024-02-14 19:27:04.848075] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.505 19:27:04 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:27.766 19:27:05 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:28.024 [2024-02-14 19:27:05.272594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:28.024 19:27:05 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:28.283 [2024-02-14 19:27:05.505291] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:28.283 19:27:05 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:28.542 Fill 
FTL, iteration 1 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:28.542 19:27:05 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:28.542 19:27:05 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:28.542 19:27:05 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:28.542 19:27:05 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:28.542 19:27:05 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:28.542 19:27:05 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:28.542 19:27:05 -- ftl/common.sh@163 -- # spdk_ini_pid=79102 00:26:28.542 19:27:05 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:28.542 19:27:05 -- ftl/common.sh@165 -- # waitforlisten 79102 /var/tmp/spdk.tgt.sock 00:26:28.542 19:27:05 -- common/autotest_common.sh@817 -- # '[' -z 79102 ']' 00:26:28.542 19:27:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:28.542 19:27:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:28.542 19:27:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:28.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:28.542 19:27:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:28.542 19:27:05 -- common/autotest_common.sh@10 -- # set +x 00:26:28.542 [2024-02-14 19:27:05.903388] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:26:28.542 [2024-02-14 19:27:05.903808] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79102 ] 00:26:28.801 [2024-02-14 19:27:06.058695] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.801 [2024-02-14 19:27:06.210110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:28.801 [2024-02-14 19:27:06.210621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.737 19:27:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:29.737 19:27:06 -- common/autotest_common.sh@850 -- # return 0 00:26:29.737 19:27:06 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:29.737 ftln1 00:26:29.737 19:27:07 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:29.737 19:27:07 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:29.995 19:27:07 -- ftl/common.sh@173 -- # echo ']}' 00:26:29.995 19:27:07 -- ftl/common.sh@176 -- # killprocess 79102 00:26:29.995 19:27:07 -- common/autotest_common.sh@924 -- # '[' -z 79102 ']' 00:26:29.995 19:27:07 -- common/autotest_common.sh@928 -- # kill -0 79102 00:26:29.995 19:27:07 -- common/autotest_common.sh@929 -- # uname 00:26:29.995 19:27:07 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:29.995 19:27:07 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 79102 00:26:29.995 killing process with pid 79102 00:26:29.995 19:27:07 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:26:29.995 19:27:07 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:26:29.995 19:27:07 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 79102' 00:26:29.995 19:27:07 -- common/autotest_common.sh@943 -- # kill 79102 00:26:29.995 19:27:07 -- common/autotest_common.sh@948 -- # wait 79102 00:26:31.898 19:27:09 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:31.898 19:27:09 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:31.898 [2024-02-14 19:27:09.094779] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:26:31.898 [2024-02-14 19:27:09.095554] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79144 ] 00:26:31.898 [2024-02-14 19:27:09.269364] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.157 [2024-02-14 19:27:09.424804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.157 [2024-02-14 19:27:09.424883] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:37.308  Copying: 216/1024 [MB] (216 MBps) Copying: 438/1024 [MB] (222 MBps) Copying: 658/1024 [MB] (220 MBps) Copying: 881/1024 [MB] (223 MBps) Copying: 1024/1024 [MB] (average 219 MBps)[2024-02-14 19:27:14.424296] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:38.245 00:26:38.245 00:26:38.245 Calculate MD5 checksum, iteration 1 00:26:38.245 19:27:15 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:38.245 19:27:15 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:38.245 19:27:15 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:38.245 19:27:15 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:38.245 19:27:15 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:38.245 19:27:15 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:38.245 19:27:15 -- ftl/common.sh@154 -- # return 0 00:26:38.245 19:27:15 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:38.245 [2024-02-14 19:27:15.424769] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:26:38.245 [2024-02-14 19:27:15.425170] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79216 ] 00:26:38.245 [2024-02-14 19:27:15.589538] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.504 [2024-02-14 19:27:15.744365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.504 [2024-02-14 19:27:15.744475] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:41.080  Copying: 473/1024 [MB] (473 MBps) Copying: 932/1024 [MB] (459 MBps) Copying: 1024/1024 [MB] (average 468 MBps)[2024-02-14 19:27:18.275728] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:42.017 00:26:42.017 00:26:42.017 19:27:19 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:42.017 19:27:19 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:43.923 19:27:20 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:43.923 Fill FTL, iteration 2 00:26:43.923 19:27:20 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9fa72f33aab95632ebeaa1ec776462cc 00:26:43.923 19:27:20 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:43.923 19:27:20 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:43.923 19:27:20 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:43.923 19:27:20 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:43.923 19:27:20 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:43.923 19:27:20 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:43.923 19:27:20 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:43.923 19:27:20 -- ftl/common.sh@154 -- # return 0 00:26:43.923 19:27:20 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:43.923 [2024-02-14 19:27:21.041356] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:26:43.923 [2024-02-14 19:27:21.041527] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79280 ] 00:26:43.923 [2024-02-14 19:27:21.212283] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.182 [2024-02-14 19:27:21.412378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.182 [2024-02-14 19:27:21.412470] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:49.305  Copying: 222/1024 [MB] (222 MBps) Copying: 445/1024 [MB] (223 MBps) Copying: 663/1024 [MB] (218 MBps) Copying: 885/1024 [MB] (222 MBps) Copying: 1024/1024 [MB] (average 221 MBps)[2024-02-14 19:27:26.378341] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:49.873 00:26:49.873 00:26:50.132 Calculate MD5 checksum, iteration 2 00:26:50.132 19:27:27 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:50.132 19:27:27 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:50.132 19:27:27 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:50.132 19:27:27 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:50.132 19:27:27 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:50.132 19:27:27 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:50.132 19:27:27 -- ftl/common.sh@154 -- # return 0 00:26:50.132 19:27:27 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:50.132 [2024-02-14 19:27:27.393113] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:26:50.132 [2024-02-14 19:27:27.393268] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79344 ] 00:26:50.391 [2024-02-14 19:27:27.559720] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.391 [2024-02-14 19:27:27.704309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.391 [2024-02-14 19:27:27.704636] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:53.382  Copying: 486/1024 [MB] (486 MBps) Copying: 963/1024 [MB] (477 MBps) Copying: 1024/1024 [MB] (average 481 MBps)[2024-02-14 19:27:30.765405] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:54.317 00:26:54.317 00:26:54.317 19:27:31 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:54.317 19:27:31 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:56.217 19:27:33 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:56.217 19:27:33 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=181527e64e6b0c23f85dd854befa5a6d 00:26:56.217 19:27:33 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:56.217 19:27:33 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:56.217 19:27:33 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:56.475 [2024-02-14 19:27:33.712967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.475 [2024-02-14 19:27:33.713015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:56.476 [2024-02-14 19:27:33.713033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:56.476 [2024-02-14 19:27:33.713042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.476 [2024-02-14 19:27:33.713071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.476 [2024-02-14 19:27:33.713084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:56.476 [2024-02-14 19:27:33.713093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:56.476 [2024-02-14 19:27:33.713106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.476 [2024-02-14 19:27:33.713129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.476 [2024-02-14 19:27:33.713141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:56.476 [2024-02-14 19:27:33.713150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:56.476 [2024-02-14 19:27:33.713168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.476 [2024-02-14 19:27:33.713244] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.253 ms, result 0 00:26:56.476 true 00:26:56.476 19:27:33 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:56.734 { 00:26:56.734 "name": "ftl", 00:26:56.734 "properties": [ 00:26:56.734 { 00:26:56.734 "name": "superblock_version", 00:26:56.734 
"value": 5, 00:26:56.734 "read-only": true 00:26:56.734 }, 00:26:56.734 { 00:26:56.735 "name": "base_device", 00:26:56.735 "bands": [ 00:26:56.735 { 00:26:56.735 "id": 0, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 1, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 2, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 3, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 4, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 5, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 6, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 7, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 8, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 9, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 10, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 11, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 12, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 13, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 14, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 15, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 16, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 17, 00:26:56.735 "state": "FREE", 00:26:56.735 "validity": 0.0 00:26:56.735 } 00:26:56.735 ], 00:26:56.735 "read-only": true 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "name": "cache_device", 00:26:56.735 "type": "bdev", 00:26:56.735 "chunks": [ 00:26:56.735 { 00:26:56.735 "id": 0, 00:26:56.735 "state": "CLOSED", 00:26:56.735 "utilization": 1.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 1, 00:26:56.735 "state": "CLOSED", 00:26:56.735 "utilization": 1.0 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 2, 00:26:56.735 "state": "OPEN", 00:26:56.735 "utilization": 0.001953125 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "id": 3, 00:26:56.735 "state": "OPEN", 00:26:56.735 "utilization": 0.0 00:26:56.735 } 00:26:56.735 ], 00:26:56.735 "read-only": true 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "name": "verbose_mode", 00:26:56.735 "value": true, 00:26:56.735 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:56.735 }, 00:26:56.735 { 00:26:56.735 "name": "prep_upgrade_on_shutdown", 00:26:56.735 "value": false, 00:26:56.735 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:56.735 } 00:26:56.735 ] 00:26:56.735 } 00:26:56.735 19:27:33 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:56.735 [2024-02-14 19:27:34.093082] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:56.735 [2024-02-14 19:27:34.093267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:56.735 [2024-02-14 19:27:34.093382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:56.735 [2024-02-14 19:27:34.093427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.735 [2024-02-14 19:27:34.093589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.735 [2024-02-14 19:27:34.093640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:56.735 [2024-02-14 19:27:34.093675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:56.735 [2024-02-14 19:27:34.093794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.735 [2024-02-14 19:27:34.093954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.735 [2024-02-14 19:27:34.094006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:56.735 [2024-02-14 19:27:34.094109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:56.735 [2024-02-14 19:27:34.094153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.735 [2024-02-14 19:27:34.094251] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.146 ms, result 0 00:26:56.735 true 00:26:56.735 19:27:34 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:56.735 19:27:34 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:56.735 19:27:34 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:56.994 19:27:34 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:56.994 19:27:34 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:56.994 19:27:34 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:57.253 [2024-02-14 19:27:34.536292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.253 [2024-02-14 19:27:34.536336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:57.253 [2024-02-14 19:27:34.536368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:57.253 [2024-02-14 19:27:34.536377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.253 [2024-02-14 19:27:34.536406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.253 [2024-02-14 19:27:34.536419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:57.253 [2024-02-14 19:27:34.536429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:57.253 [2024-02-14 19:27:34.536438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.253 [2024-02-14 19:27:34.536461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.253 [2024-02-14 19:27:34.536471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:57.253 [2024-02-14 19:27:34.536481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:57.253 [2024-02-14 19:27:34.536489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.253 [2024-02-14 19:27:34.536584] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management 
process finished, name 'Set FTL property', duration = 0.276 ms, result 0 00:26:57.253 true 00:26:57.253 19:27:34 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:57.512 { 00:26:57.512 "name": "ftl", 00:26:57.512 "properties": [ 00:26:57.512 { 00:26:57.512 "name": "superblock_version", 00:26:57.512 "value": 5, 00:26:57.512 "read-only": true 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "name": "base_device", 00:26:57.512 "bands": [ 00:26:57.512 { 00:26:57.512 "id": 0, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 1, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 2, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 3, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 4, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 5, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 6, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 7, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 8, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 9, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 10, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 11, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 12, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 13, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 14, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 15, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 16, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 17, 00:26:57.512 "state": "FREE", 00:26:57.512 "validity": 0.0 00:26:57.512 } 00:26:57.512 ], 00:26:57.512 "read-only": true 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "name": "cache_device", 00:26:57.512 "type": "bdev", 00:26:57.512 "chunks": [ 00:26:57.512 { 00:26:57.512 "id": 0, 00:26:57.512 "state": "CLOSED", 00:26:57.512 "utilization": 1.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 1, 00:26:57.512 "state": "CLOSED", 00:26:57.512 "utilization": 1.0 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 2, 00:26:57.512 "state": "OPEN", 00:26:57.512 "utilization": 0.001953125 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "id": 3, 00:26:57.512 "state": "OPEN", 00:26:57.512 "utilization": 0.0 00:26:57.512 } 00:26:57.512 ], 00:26:57.512 "read-only": true 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "name": "verbose_mode", 00:26:57.512 "value": true, 00:26:57.512 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:57.512 }, 00:26:57.512 { 00:26:57.512 "name": "prep_upgrade_on_shutdown", 00:26:57.512 "value": true, 00:26:57.512 "desc": "During 
shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:57.512 } 00:26:57.513 ] 00:26:57.513 } 00:26:57.513 19:27:34 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:57.513 19:27:34 -- ftl/common.sh@130 -- # [[ -n 78978 ]] 00:26:57.513 19:27:34 -- ftl/common.sh@131 -- # killprocess 78978 00:26:57.513 19:27:34 -- common/autotest_common.sh@924 -- # '[' -z 78978 ']' 00:26:57.513 19:27:34 -- common/autotest_common.sh@928 -- # kill -0 78978 00:26:57.513 19:27:34 -- common/autotest_common.sh@929 -- # uname 00:26:57.513 19:27:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:57.513 19:27:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 78978 00:26:57.513 killing process with pid 78978 00:26:57.513 19:27:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:26:57.513 19:27:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:26:57.513 19:27:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 78978' 00:26:57.513 19:27:34 -- common/autotest_common.sh@943 -- # kill 78978 00:26:57.513 19:27:34 -- common/autotest_common.sh@948 -- # wait 78978 00:26:58.519 [2024-02-14 19:27:35.547709] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:58.520 [2024-02-14 19:27:35.561094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.520 [2024-02-14 19:27:35.561138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:58.520 [2024-02-14 19:27:35.561171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:58.520 [2024-02-14 19:27:35.561187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.520 [2024-02-14 19:27:35.561213] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:58.520 [2024-02-14 19:27:35.564325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.520 [2024-02-14 19:27:35.564354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:58.520 [2024-02-14 19:27:35.564382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.094 ms 00:26:58.520 [2024-02-14 19:27:35.564392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.595848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.638 [2024-02-14 19:27:43.595902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:06.638 [2024-02-14 19:27:43.595920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8031.486 ms 00:27:06.638 [2024-02-14 19:27:43.595930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.597051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.638 [2024-02-14 19:27:43.597089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:06.638 [2024-02-14 19:27:43.597119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.102 ms 00:27:06.638 [2024-02-14 19:27:43.597137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.598361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.638 [2024-02-14 19:27:43.598388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:06.638 [2024-02-14 19:27:43.598400] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.172 ms 00:27:06.638 [2024-02-14 19:27:43.598408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.608682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.638 [2024-02-14 19:27:43.608715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:06.638 [2024-02-14 19:27:43.608746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.222 ms 00:27:06.638 [2024-02-14 19:27:43.608755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.615179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.638 [2024-02-14 19:27:43.615214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:06.638 [2024-02-14 19:27:43.615233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.388 ms 00:27:06.638 [2024-02-14 19:27:43.615243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.615324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.638 [2024-02-14 19:27:43.615341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:06.638 [2024-02-14 19:27:43.615351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:06.638 [2024-02-14 19:27:43.615360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.625197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.638 [2024-02-14 19:27:43.625229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:06.638 [2024-02-14 19:27:43.625241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.819 ms 00:27:06.638 [2024-02-14 19:27:43.625249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.635130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.638 [2024-02-14 19:27:43.635160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:06.638 [2024-02-14 19:27:43.635172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.849 ms 00:27:06.638 [2024-02-14 19:27:43.635180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.645268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.638 [2024-02-14 19:27:43.645299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:06.638 [2024-02-14 19:27:43.645312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.055 ms 00:27:06.638 [2024-02-14 19:27:43.645320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.655776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.638 [2024-02-14 19:27:43.655807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:06.638 [2024-02-14 19:27:43.655819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.397 ms 00:27:06.638 [2024-02-14 19:27:43.655827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.638 [2024-02-14 19:27:43.655859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:06.638 [2024-02-14 19:27:43.655877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 
wr_cnt: 1 state: closed 00:27:06.638 [2024-02-14 19:27:43.655888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:06.638 [2024-02-14 19:27:43.655897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:06.638 [2024-02-14 19:27:43.655907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.655916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.655924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.655933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.655942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.655951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.655960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.655969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.655978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.655986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.655995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.656004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.656013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.656034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.656043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:06.638 [2024-02-14 19:27:43.656054] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:06.638 [2024-02-14 19:27:43.656063] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 764770a3-dba9-4cc1-86e9-4022bdfc4b9a 00:27:06.638 [2024-02-14 19:27:43.656088] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:06.638 [2024-02-14 19:27:43.656096] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:06.638 [2024-02-14 19:27:43.656104] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:06.638 [2024-02-14 19:27:43.656113] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:06.638 [2024-02-14 19:27:43.656121] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:06.638 [2024-02-14 19:27:43.656130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:06.638 [2024-02-14 19:27:43.656137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:06.639 [2024-02-14 19:27:43.656145] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:06.639 [2024-02-14 19:27:43.656153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:06.639 [2024-02-14 19:27:43.656162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.639 [2024-02-14 19:27:43.656170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:06.639 [2024-02-14 19:27:43.656179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:27:06.639 [2024-02-14 19:27:43.656188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.669281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.639 [2024-02-14 19:27:43.669313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:06.639 [2024-02-14 19:27:43.669327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.072 ms 00:27:06.639 [2024-02-14 19:27:43.669335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.669566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.639 [2024-02-14 19:27:43.669582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:06.639 [2024-02-14 19:27:43.669615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:27:06.639 [2024-02-14 19:27:43.669627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.713492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.713531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:06.639 [2024-02-14 19:27:43.713560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.713575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.713608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.713620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:06.639 [2024-02-14 19:27:43.713629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.713642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.713716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.713732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:06.639 [2024-02-14 19:27:43.713743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.713752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.713771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.713781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:06.639 [2024-02-14 19:27:43.713790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.713815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.790385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.790438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:06.639 [2024-02-14 
19:27:43.790469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.790478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.826062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.826108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:06.639 [2024-02-14 19:27:43.826140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.826150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.826275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.826306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:06.639 [2024-02-14 19:27:43.826316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.826325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.826370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.826384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:06.639 [2024-02-14 19:27:43.826393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.826402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.826515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.826532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:06.639 [2024-02-14 19:27:43.826542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.826550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.826607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.826635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:06.639 [2024-02-14 19:27:43.826645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.826655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.826694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.826713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:06.639 [2024-02-14 19:27:43.826723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.826731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.826779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:06.639 [2024-02-14 19:27:43.826794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:06.639 [2024-02-14 19:27:43.826804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:06.639 [2024-02-14 19:27:43.826829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.639 [2024-02-14 19:27:43.826950] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8265.885 ms, result 0 00:27:09.930 19:27:46 -- ftl/common.sh@132 
-- # unset spdk_tgt_pid 00:27:09.930 19:27:46 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:09.930 19:27:46 -- ftl/common.sh@81 -- # local base_bdev= 00:27:09.930 19:27:46 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:09.930 19:27:46 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:09.930 19:27:46 -- ftl/common.sh@89 -- # spdk_tgt_pid=79543 00:27:09.930 19:27:46 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:09.930 19:27:46 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:09.930 19:27:46 -- ftl/common.sh@91 -- # waitforlisten 79543 00:27:09.930 19:27:46 -- common/autotest_common.sh@817 -- # '[' -z 79543 ']' 00:27:09.930 19:27:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.930 19:27:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:09.930 19:27:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.930 19:27:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:09.930 19:27:46 -- common/autotest_common.sh@10 -- # set +x 00:27:09.930 [2024-02-14 19:27:46.838620] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:27:09.930 [2024-02-14 19:27:46.838784] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79543 ] 00:27:09.930 [2024-02-14 19:27:47.004430] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.930 [2024-02-14 19:27:47.147746] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:09.930 [2024-02-14 19:27:47.147940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.930 [2024-02-14 19:27:47.147987] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:10.498 [2024-02-14 19:27:47.784752] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:10.498 [2024-02-14 19:27:47.784841] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:10.759 [2024-02-14 19:27:47.923384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.759 [2024-02-14 19:27:47.923426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:10.759 [2024-02-14 19:27:47.923469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:10.759 [2024-02-14 19:27:47.923480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.759 [2024-02-14 19:27:47.923581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.759 [2024-02-14 19:27:47.923602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:10.759 [2024-02-14 19:27:47.923629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:10.759 [2024-02-14 19:27:47.923639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.759 [2024-02-14 19:27:47.923669] 
mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:10.759 [2024-02-14 19:27:47.924541] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:10.759 [2024-02-14 19:27:47.924570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.759 [2024-02-14 19:27:47.924582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:10.759 [2024-02-14 19:27:47.924593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.907 ms 00:27:10.759 [2024-02-14 19:27:47.924603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.759 [2024-02-14 19:27:47.925712] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:10.759 [2024-02-14 19:27:47.939484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.759 [2024-02-14 19:27:47.939545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:10.759 [2024-02-14 19:27:47.939578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.774 ms 00:27:10.759 [2024-02-14 19:27:47.939594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.759 [2024-02-14 19:27:47.939673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.759 [2024-02-14 19:27:47.939691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:10.759 [2024-02-14 19:27:47.939703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:10.759 [2024-02-14 19:27:47.939712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.759 [2024-02-14 19:27:47.943816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.759 [2024-02-14 19:27:47.943851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:10.759 [2024-02-14 19:27:47.943887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.017 ms 00:27:10.760 [2024-02-14 19:27:47.943896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.760 [2024-02-14 19:27:47.943947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.760 [2024-02-14 19:27:47.943963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:10.760 [2024-02-14 19:27:47.943974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:10.760 [2024-02-14 19:27:47.943983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.760 [2024-02-14 19:27:47.944037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.760 [2024-02-14 19:27:47.944052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:10.760 [2024-02-14 19:27:47.944062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:10.760 [2024-02-14 19:27:47.944075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.760 [2024-02-14 19:27:47.944101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:10.760 [2024-02-14 19:27:47.947779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.760 [2024-02-14 19:27:47.947812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:10.760 [2024-02-14 19:27:47.947856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.685 ms 
00:27:10.760 [2024-02-14 19:27:47.947866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.760 [2024-02-14 19:27:47.947904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.760 [2024-02-14 19:27:47.947918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:10.760 [2024-02-14 19:27:47.947929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:10.760 [2024-02-14 19:27:47.947938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.760 [2024-02-14 19:27:47.947963] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:10.760 [2024-02-14 19:27:47.947988] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:10.760 [2024-02-14 19:27:47.948025] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:10.760 [2024-02-14 19:27:47.948049] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:10.760 [2024-02-14 19:27:47.948122] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:10.760 [2024-02-14 19:27:47.948135] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:10.760 [2024-02-14 19:27:47.948147] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:10.760 [2024-02-14 19:27:47.948165] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948176] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948186] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:10.760 [2024-02-14 19:27:47.948195] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:10.760 [2024-02-14 19:27:47.948208] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:10.760 [2024-02-14 19:27:47.948217] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:10.760 [2024-02-14 19:27:47.948227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.760 [2024-02-14 19:27:47.948236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:10.760 [2024-02-14 19:27:47.948246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:27:10.760 [2024-02-14 19:27:47.948270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.760 [2024-02-14 19:27:47.948343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.760 [2024-02-14 19:27:47.948358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:10.760 [2024-02-14 19:27:47.948368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:10.760 [2024-02-14 19:27:47.948377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.760 [2024-02-14 19:27:47.948467] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:10.760 [2024-02-14 19:27:47.948485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:10.760 [2024-02-14 19:27:47.948495] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] 
offset: 0.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:10.760 [2024-02-14 19:27:47.948514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:10.760 [2024-02-14 19:27:47.948522] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:10.760 [2024-02-14 19:27:47.948568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:10.760 [2024-02-14 19:27:47.948579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:10.760 [2024-02-14 19:27:47.948590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:10.760 [2024-02-14 19:27:47.948599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:10.760 [2024-02-14 19:27:47.948607] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:10.760 [2024-02-14 19:27:47.948616] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:10.760 [2024-02-14 19:27:47.948625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:10.760 [2024-02-14 19:27:47.948634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:10.760 [2024-02-14 19:27:47.948643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:10.760 [2024-02-14 19:27:47.948652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:10.760 [2024-02-14 19:27:47.948660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:10.760 [2024-02-14 19:27:47.948669] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:10.760 [2024-02-14 19:27:47.948678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:10.760 [2024-02-14 19:27:47.948686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:10.760 [2024-02-14 19:27:47.948694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:10.760 [2024-02-14 19:27:47.948703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:10.760 [2024-02-14 19:27:47.948720] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:10.760 [2024-02-14 19:27:47.948729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:10.760 [2024-02-14 19:27:47.948746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:10.760 [2024-02-14 19:27:47.948756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:10.760 [2024-02-14 19:27:47.948773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:10.760 [2024-02-14 19:27:47.948781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948789] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:10.760 [2024-02-14 19:27:47.948798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:10.760 [2024-02-14 19:27:47.948806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948815] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
trim_md 00:27:10.760 [2024-02-14 19:27:47.948823] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:10.760 [2024-02-14 19:27:47.948831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:10.760 [2024-02-14 19:27:47.948840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:10.760 [2024-02-14 19:27:47.948849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:10.760 [2024-02-14 19:27:47.948865] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:10.760 [2024-02-14 19:27:47.948875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:10.760 [2024-02-14 19:27:47.948884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:10.760 [2024-02-14 19:27:47.948903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:10.760 [2024-02-14 19:27:47.948930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:10.760 [2024-02-14 19:27:47.948938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:10.760 [2024-02-14 19:27:47.948947] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:10.760 [2024-02-14 19:27:47.948955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:10.760 [2024-02-14 19:27:47.948963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:10.760 [2024-02-14 19:27:47.948973] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:10.760 [2024-02-14 19:27:47.948985] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:10.760 [2024-02-14 19:27:47.949001] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:10.760 [2024-02-14 19:27:47.949010] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:10.760 [2024-02-14 19:27:47.949020] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:10.761 [2024-02-14 19:27:47.949029] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:10.761 [2024-02-14 19:27:47.949038] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:10.761 [2024-02-14 19:27:47.949047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:10.761 [2024-02-14 19:27:47.949057] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:10.761 [2024-02-14 19:27:47.949077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:10.761 [2024-02-14 19:27:47.949086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 
00:27:10.761 [2024-02-14 19:27:47.949096] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:10.761 [2024-02-14 19:27:47.949105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:10.761 [2024-02-14 19:27:47.949115] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:10.761 [2024-02-14 19:27:47.949125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:10.761 [2024-02-14 19:27:47.949134] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:10.761 [2024-02-14 19:27:47.949145] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:10.761 [2024-02-14 19:27:47.949155] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:10.761 [2024-02-14 19:27:47.949164] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:10.761 [2024-02-14 19:27:47.949174] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:10.761 [2024-02-14 19:27:47.949183] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:10.761 [2024-02-14 19:27:47.949194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:47.949204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:10.761 [2024-02-14 19:27:47.949213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.764 ms 00:27:10.761 [2024-02-14 19:27:47.949222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:47.964124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:47.964277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:10.761 [2024-02-14 19:27:47.964401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.848 ms 00:27:10.761 [2024-02-14 19:27:47.964456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:47.964631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:47.964753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:10.761 [2024-02-14 19:27:47.964867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:10.761 [2024-02-14 19:27:47.964984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:47.995665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:47.995837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:10.761 [2024-02-14 19:27:47.995958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.573 ms 00:27:10.761 [2024-02-14 19:27:47.996005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 
19:27:47.996138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:47.996187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:10.761 [2024-02-14 19:27:47.996223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:10.761 [2024-02-14 19:27:47.996316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:47.996730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:47.996919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:10.761 [2024-02-14 19:27:47.997032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:27:10.761 [2024-02-14 19:27:47.997075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:47.997205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:47.997302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:10.761 [2024-02-14 19:27:47.997394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:10.761 [2024-02-14 19:27:47.997528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.012584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.012745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:10.761 [2024-02-14 19:27:48.012786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.984 ms 00:27:10.761 [2024-02-14 19:27:48.012798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.025779] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:10.761 [2024-02-14 19:27:48.025816] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:10.761 [2024-02-14 19:27:48.025847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.025857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:10.761 [2024-02-14 19:27:48.025868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.910 ms 00:27:10.761 [2024-02-14 19:27:48.025877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.041185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.041220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:10.761 [2024-02-14 19:27:48.041234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.230 ms 00:27:10.761 [2024-02-14 19:27:48.041243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.053325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.053359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:10.761 [2024-02-14 19:27:48.053372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.034 ms 00:27:10.761 [2024-02-14 19:27:48.053381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.065679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 
[2024-02-14 19:27:48.065852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:10.761 [2024-02-14 19:27:48.065891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.259 ms 00:27:10.761 [2024-02-14 19:27:48.065927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.066410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.066434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:10.761 [2024-02-14 19:27:48.066446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.367 ms 00:27:10.761 [2024-02-14 19:27:48.066455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.125079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.125134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:10.761 [2024-02-14 19:27:48.125151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 58.595 ms 00:27:10.761 [2024-02-14 19:27:48.125161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.135052] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:10.761 [2024-02-14 19:27:48.135651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.135680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:10.761 [2024-02-14 19:27:48.135700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.439 ms 00:27:10.761 [2024-02-14 19:27:48.135711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.135791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.135809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:10.761 [2024-02-14 19:27:48.135820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:10.761 [2024-02-14 19:27:48.135830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.135895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.135911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:10.761 [2024-02-14 19:27:48.135951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:10.761 [2024-02-14 19:27:48.135966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.137674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.137704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:10.761 [2024-02-14 19:27:48.137733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.686 ms 00:27:10.761 [2024-02-14 19:27:48.137742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.137777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.137790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:10.761 [2024-02-14 19:27:48.137800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:10.761 [2024-02-14 19:27:48.137809] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.137848] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:10.761 [2024-02-14 19:27:48.137864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.137874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:10.761 [2024-02-14 19:27:48.137883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:10.761 [2024-02-14 19:27:48.137892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.761 [2024-02-14 19:27:48.163629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.761 [2024-02-14 19:27:48.163666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:10.761 [2024-02-14 19:27:48.163698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.673 ms 00:27:10.762 [2024-02-14 19:27:48.163715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.762 [2024-02-14 19:27:48.163789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.762 [2024-02-14 19:27:48.163806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:10.762 [2024-02-14 19:27:48.163817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:10.762 [2024-02-14 19:27:48.163827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.762 [2024-02-14 19:27:48.165139] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 241.185 ms, result 0 00:27:11.021 [2024-02-14 19:27:48.179992] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.021 [2024-02-14 19:27:48.195991] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:11.021 [2024-02-14 19:27:48.204100] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:11.021 19:27:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:11.021 19:27:48 -- common/autotest_common.sh@850 -- # return 0 00:27:11.021 19:27:48 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:11.021 19:27:48 -- ftl/common.sh@95 -- # return 0 00:27:11.021 19:27:48 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:11.279 [2024-02-14 19:27:48.497096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.279 [2024-02-14 19:27:48.497137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:11.279 [2024-02-14 19:27:48.497154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:11.279 [2024-02-14 19:27:48.497163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.279 [2024-02-14 19:27:48.497191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.279 [2024-02-14 19:27:48.497203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:11.279 [2024-02-14 19:27:48.497213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:11.279 [2024-02-14 19:27:48.497222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.279 [2024-02-14 19:27:48.497249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:11.279 [2024-02-14 19:27:48.497261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:11.279 [2024-02-14 19:27:48.497270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:11.279 [2024-02-14 19:27:48.497279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.279 [2024-02-14 19:27:48.497338] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.231 ms, result 0 00:27:11.279 true 00:27:11.279 19:27:48 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:11.279 { 00:27:11.279 "name": "ftl", 00:27:11.279 "properties": [ 00:27:11.279 { 00:27:11.279 "name": "superblock_version", 00:27:11.279 "value": 5, 00:27:11.279 "read-only": true 00:27:11.279 }, 00:27:11.279 { 00:27:11.279 "name": "base_device", 00:27:11.279 "bands": [ 00:27:11.279 { 00:27:11.279 "id": 0, 00:27:11.280 "state": "CLOSED", 00:27:11.280 "validity": 1.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 1, 00:27:11.280 "state": "CLOSED", 00:27:11.280 "validity": 1.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 2, 00:27:11.280 "state": "CLOSED", 00:27:11.280 "validity": 0.007843137254901933 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 3, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 4, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 5, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 6, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 7, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 8, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 9, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 10, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 11, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 12, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 13, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 14, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 15, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 16, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 17, 00:27:11.280 "state": "FREE", 00:27:11.280 "validity": 0.0 00:27:11.280 } 00:27:11.280 ], 00:27:11.280 "read-only": true 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "name": "cache_device", 00:27:11.280 "type": "bdev", 00:27:11.280 "chunks": [ 00:27:11.280 { 00:27:11.280 "id": 0, 00:27:11.280 "state": "OPEN", 00:27:11.280 "utilization": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 1, 00:27:11.280 "state": "OPEN", 00:27:11.280 "utilization": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 2, 00:27:11.280 "state": "FREE", 00:27:11.280 "utilization": 0.0 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "id": 3, 
00:27:11.280 "state": "FREE", 00:27:11.280 "utilization": 0.0 00:27:11.280 } 00:27:11.280 ], 00:27:11.280 "read-only": true 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "name": "verbose_mode", 00:27:11.280 "value": true, 00:27:11.280 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:11.280 }, 00:27:11.280 { 00:27:11.280 "name": "prep_upgrade_on_shutdown", 00:27:11.280 "value": false, 00:27:11.280 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:11.280 } 00:27:11.280 ] 00:27:11.280 } 00:27:11.280 19:27:48 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:11.280 19:27:48 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:11.280 19:27:48 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:11.539 19:27:48 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:11.539 19:27:48 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:11.539 19:27:48 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:11.539 19:27:48 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:11.539 19:27:48 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:11.797 19:27:49 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:11.797 19:27:49 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:11.797 19:27:49 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:11.797 19:27:49 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:11.797 19:27:49 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:11.797 19:27:49 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:11.797 Validate MD5 checksum, iteration 1 00:27:11.797 19:27:49 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:11.798 19:27:49 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:11.798 19:27:49 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:11.798 19:27:49 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:11.798 19:27:49 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:11.798 19:27:49 -- ftl/common.sh@154 -- # return 0 00:27:11.798 19:27:49 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:12.057 [2024-02-14 19:27:49.265759] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:27:12.057 [2024-02-14 19:27:49.265891] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79589 ] 00:27:12.057 [2024-02-14 19:27:49.423710] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.316 [2024-02-14 19:27:49.621531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.316 [2024-02-14 19:27:49.621625] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:15.356  Copying: 534/1024 [MB] (534 MBps) Copying: 1024/1024 [MB] (average 524 MBps)[2024-02-14 19:27:52.507874] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:27:16.292 00:27:16.292 00:27:16.292 19:27:53 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:16.292 19:27:53 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:18.197 19:27:55 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:18.197 Validate MD5 checksum, iteration 2 00:27:18.197 19:27:55 -- ftl/upgrade_shutdown.sh@103 -- # sum=9fa72f33aab95632ebeaa1ec776462cc 00:27:18.197 19:27:55 -- ftl/upgrade_shutdown.sh@105 -- # [[ 9fa72f33aab95632ebeaa1ec776462cc != \9\f\a\7\2\f\3\3\a\a\b\9\5\6\3\2\e\b\e\a\a\1\e\c\7\7\6\4\6\2\c\c ]] 00:27:18.197 19:27:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:18.197 19:27:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:18.197 19:27:55 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:18.197 19:27:55 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:18.197 19:27:55 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:18.197 19:27:55 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:18.197 19:27:55 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:18.197 19:27:55 -- ftl/common.sh@154 -- # return 0 00:27:18.197 19:27:55 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:18.197 [2024-02-14 19:27:55.284649] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:27:18.197 [2024-02-14 19:27:55.284774] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79656 ] 00:27:18.197 [2024-02-14 19:27:55.441168] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.456 [2024-02-14 19:27:55.638996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.456 [2024-02-14 19:27:55.639103] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:22.674  Copying: 527/1024 [MB] (527 MBps) Copying: 1024/1024 [MB] (average 519 MBps)[2024-02-14 19:27:59.603135] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:27:23.242 00:27:23.242 00:27:23.242 19:28:00 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:23.242 19:28:00 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:25.149 19:28:02 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:25.149 19:28:02 -- ftl/upgrade_shutdown.sh@103 -- # sum=181527e64e6b0c23f85dd854befa5a6d 00:27:25.149 19:28:02 -- ftl/upgrade_shutdown.sh@105 -- # [[ 181527e64e6b0c23f85dd854befa5a6d != \1\8\1\5\2\7\e\6\4\e\6\b\0\c\2\3\f\8\5\d\d\8\5\4\b\e\f\a\5\a\6\d ]] 00:27:25.149 19:28:02 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:25.149 19:28:02 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:25.149 19:28:02 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:25.149 19:28:02 -- ftl/common.sh@137 -- # [[ -n 79543 ]] 00:27:25.149 19:28:02 -- ftl/common.sh@138 -- # kill -9 79543 00:27:25.149 19:28:02 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:25.149 19:28:02 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:25.149 19:28:02 -- ftl/common.sh@81 -- # local base_bdev= 00:27:25.149 19:28:02 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:25.149 19:28:02 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:25.149 19:28:02 -- ftl/common.sh@89 -- # spdk_tgt_pid=79733 00:27:25.149 19:28:02 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:25.149 19:28:02 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:25.149 19:28:02 -- ftl/common.sh@91 -- # waitforlisten 79733 00:27:25.149 19:28:02 -- common/autotest_common.sh@817 -- # '[' -z 79733 ']' 00:27:25.149 19:28:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.149 19:28:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:25.149 19:28:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.149 19:28:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:25.149 19:28:02 -- common/autotest_common.sh@10 -- # set +x 00:27:25.149 [2024-02-14 19:28:02.386975] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:27:25.149 [2024-02-14 19:28:02.387269] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79733 ] 00:27:25.149 [2024-02-14 19:28:02.537464] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.149 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 816: 79543 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:25.408 [2024-02-14 19:28:02.681975] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:25.408 [2024-02-14 19:28:02.682192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.408 [2024-02-14 19:28:02.682251] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:25.977 [2024-02-14 19:28:03.324713] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:25.977 [2024-02-14 19:28:03.324779] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:26.237 [2024-02-14 19:28:03.463204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.463245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:26.237 [2024-02-14 19:28:03.463287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:26.237 [2024-02-14 19:28:03.463298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.463369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.463388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:26.237 [2024-02-14 19:28:03.463399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:26.237 [2024-02-14 19:28:03.463408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.463437] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:26.237 [2024-02-14 19:28:03.464348] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:26.237 [2024-02-14 19:28:03.464376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.464388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:26.237 [2024-02-14 19:28:03.464399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.945 ms 00:27:26.237 [2024-02-14 19:28:03.464408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.464808] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:26.237 [2024-02-14 19:28:03.481717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.481754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:26.237 [2024-02-14 19:28:03.481786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.910 ms 00:27:26.237 [2024-02-14 19:28:03.481795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.491583] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.491616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:26.237 [2024-02-14 19:28:03.491646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:26.237 [2024-02-14 19:28:03.491659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.492048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.492078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:26.237 [2024-02-14 19:28:03.492090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:27:26.237 [2024-02-14 19:28:03.492100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.492142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.492157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:26.237 [2024-02-14 19:28:03.492170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:26.237 [2024-02-14 19:28:03.492179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.492211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.492225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:26.237 [2024-02-14 19:28:03.492234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:26.237 [2024-02-14 19:28:03.492243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.492267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:26.237 [2024-02-14 19:28:03.495588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.495619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:26.237 [2024-02-14 19:28:03.495648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.327 ms 00:27:26.237 [2024-02-14 19:28:03.495657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.495695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.495711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:26.237 [2024-02-14 19:28:03.495722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:26.237 [2024-02-14 19:28:03.495731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.495768] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:26.237 [2024-02-14 19:28:03.495803] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:26.237 [2024-02-14 19:28:03.495837] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:26.237 [2024-02-14 19:28:03.495863] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:26.237 [2024-02-14 19:28:03.495935] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:26.237 [2024-02-14 
19:28:03.495947] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:26.237 [2024-02-14 19:28:03.495960] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:26.237 [2024-02-14 19:28:03.495972] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:26.237 [2024-02-14 19:28:03.495982] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:26.237 [2024-02-14 19:28:03.495992] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:26.237 [2024-02-14 19:28:03.496001] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:26.237 [2024-02-14 19:28:03.496010] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:26.237 [2024-02-14 19:28:03.496019] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:26.237 [2024-02-14 19:28:03.496029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.496039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:26.237 [2024-02-14 19:28:03.496052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:27:26.237 [2024-02-14 19:28:03.496061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.237 [2024-02-14 19:28:03.496119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.237 [2024-02-14 19:28:03.496131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:26.237 [2024-02-14 19:28:03.496141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:26.237 [2024-02-14 19:28:03.496150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.238 [2024-02-14 19:28:03.496227] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:26.238 [2024-02-14 19:28:03.496244] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:26.238 [2024-02-14 19:28:03.496254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:26.238 [2024-02-14 19:28:03.496267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.238 [2024-02-14 19:28:03.496277] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:26.238 [2024-02-14 19:28:03.496285] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:26.238 [2024-02-14 19:28:03.496294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:26.238 [2024-02-14 19:28:03.496303] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:26.238 [2024-02-14 19:28:03.496312] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:26.238 [2024-02-14 19:28:03.496321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.238 [2024-02-14 19:28:03.496329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:26.238 [2024-02-14 19:28:03.496338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:26.238 [2024-02-14 19:28:03.496348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.238 [2024-02-14 19:28:03.496356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:26.238 [2024-02-14 19:28:03.496365] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:26.238 [2024-02-14 19:28:03.496373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.238 [2024-02-14 19:28:03.496382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:26.238 [2024-02-14 19:28:03.496390] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:26.238 [2024-02-14 19:28:03.496399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.238 [2024-02-14 19:28:03.496407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:26.238 [2024-02-14 19:28:03.496416] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:26.238 [2024-02-14 19:28:03.496424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:26.238 [2024-02-14 19:28:03.496433] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:26.238 [2024-02-14 19:28:03.496441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:26.238 [2024-02-14 19:28:03.496449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:26.238 [2024-02-14 19:28:03.496458] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:26.238 [2024-02-14 19:28:03.496466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:26.238 [2024-02-14 19:28:03.496475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:26.238 [2024-02-14 19:28:03.496516] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:26.238 [2024-02-14 19:28:03.496546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:26.238 [2024-02-14 19:28:03.496555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:26.238 [2024-02-14 19:28:03.496564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:26.238 [2024-02-14 19:28:03.496573] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:26.238 [2024-02-14 19:28:03.496582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:26.238 [2024-02-14 19:28:03.496606] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:26.238 [2024-02-14 19:28:03.496616] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:26.238 [2024-02-14 19:28:03.496625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.238 [2024-02-14 19:28:03.496634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:26.238 [2024-02-14 19:28:03.496643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:26.238 [2024-02-14 19:28:03.496653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.238 [2024-02-14 19:28:03.496662] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:26.238 [2024-02-14 19:28:03.496672] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:26.238 [2024-02-14 19:28:03.496686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:26.238 [2024-02-14 19:28:03.496696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.238 [2024-02-14 19:28:03.496706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:26.238 [2024-02-14 19:28:03.496716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:26.238 [2024-02-14 19:28:03.496725] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:26.238 [2024-02-14 19:28:03.496734] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:26.238 [2024-02-14 19:28:03.496743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:26.238 [2024-02-14 19:28:03.496752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:26.238 [2024-02-14 19:28:03.496763] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:26.238 [2024-02-14 19:28:03.496775] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:26.238 [2024-02-14 19:28:03.496786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:26.238 [2024-02-14 19:28:03.496797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:26.238 [2024-02-14 19:28:03.496807] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:26.238 [2024-02-14 19:28:03.496817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:26.238 [2024-02-14 19:28:03.496827] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:26.238 [2024-02-14 19:28:03.496848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:26.238 [2024-02-14 19:28:03.496859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:26.238 [2024-02-14 19:28:03.496869] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:26.238 [2024-02-14 19:28:03.496894] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:26.238 [2024-02-14 19:28:03.496904] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:26.238 [2024-02-14 19:28:03.496934] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:26.238 [2024-02-14 19:28:03.496945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:26.238 [2024-02-14 19:28:03.496956] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:26.238 [2024-02-14 19:28:03.496966] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:26.238 [2024-02-14 19:28:03.496977] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:26.238 [2024-02-14 19:28:03.496988] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:26.238 [2024-02-14 19:28:03.496998] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:26.238 [2024-02-14 19:28:03.497009] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:26.238 [2024-02-14 19:28:03.497019] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:26.238 [2024-02-14 19:28:03.497031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.238 [2024-02-14 19:28:03.497041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:26.238 [2024-02-14 19:28:03.497058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.840 ms 00:27:26.238 [2024-02-14 19:28:03.497067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.238 [2024-02-14 19:28:03.510803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.238 [2024-02-14 19:28:03.510840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:26.238 [2024-02-14 19:28:03.510875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.681 ms 00:27:26.238 [2024-02-14 19:28:03.510884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.238 [2024-02-14 19:28:03.510925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.238 [2024-02-14 19:28:03.510939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:26.238 [2024-02-14 19:28:03.510950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:26.238 [2024-02-14 19:28:03.510959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.238 [2024-02-14 19:28:03.541784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.238 [2024-02-14 19:28:03.541827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:26.238 [2024-02-14 19:28:03.541859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.771 ms 00:27:26.238 [2024-02-14 19:28:03.541869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.238 [2024-02-14 19:28:03.541955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.238 [2024-02-14 19:28:03.541971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:26.238 [2024-02-14 19:28:03.541982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:26.238 [2024-02-14 19:28:03.541991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.238 [2024-02-14 19:28:03.542117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.238 [2024-02-14 19:28:03.542134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:26.238 [2024-02-14 19:28:03.542146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:27:26.238 [2024-02-14 19:28:03.542155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.238 [2024-02-14 19:28:03.542203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.238 [2024-02-14 19:28:03.542220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:26.238 [2024-02-14 19:28:03.542231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:26.238 [2024-02-14 19:28:03.542255] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.238 [2024-02-14 19:28:03.557102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.238 [2024-02-14 19:28:03.557137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:26.239 [2024-02-14 19:28:03.557168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.822 ms 00:27:26.239 [2024-02-14 19:28:03.557177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.239 [2024-02-14 19:28:03.557298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.239 [2024-02-14 19:28:03.557315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:26.239 [2024-02-14 19:28:03.557326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:26.239 [2024-02-14 19:28:03.557336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.239 [2024-02-14 19:28:03.574196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.239 [2024-02-14 19:28:03.574246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:26.239 [2024-02-14 19:28:03.574277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.835 ms 00:27:26.239 [2024-02-14 19:28:03.574303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.239 [2024-02-14 19:28:03.584371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.239 [2024-02-14 19:28:03.584406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:26.239 [2024-02-14 19:28:03.584436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:27:26.239 [2024-02-14 19:28:03.584446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.239 [2024-02-14 19:28:03.645236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.239 [2024-02-14 19:28:03.645284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:26.239 [2024-02-14 19:28:03.645301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 60.577 ms 00:27:26.239 [2024-02-14 19:28:03.645317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.239 [2024-02-14 19:28:03.645413] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:26.239 [2024-02-14 19:28:03.645463] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:26.239 [2024-02-14 19:28:03.645537] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:26.239 [2024-02-14 19:28:03.645579] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:26.239 [2024-02-14 19:28:03.645591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.239 [2024-02-14 19:28:03.645616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:26.239 [2024-02-14 19:28:03.645633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.212 ms 00:27:26.239 [2024-02-14 19:28:03.645643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.239 [2024-02-14 19:28:03.645713] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:26.239 [2024-02-14 19:28:03.645731] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.239 [2024-02-14 19:28:03.645741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:26.239 [2024-02-14 19:28:03.645752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:26.239 [2024-02-14 19:28:03.645761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.498 [2024-02-14 19:28:03.662394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.498 [2024-02-14 19:28:03.662430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:26.498 [2024-02-14 19:28:03.662444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.607 ms 00:27:26.498 [2024-02-14 19:28:03.662454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.498 [2024-02-14 19:28:03.671697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.498 [2024-02-14 19:28:03.671731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:26.498 [2024-02-14 19:28:03.671746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:26.498 [2024-02-14 19:28:03.671758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.498 [2024-02-14 19:28:03.671814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.498 [2024-02-14 19:28:03.671830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:26.498 [2024-02-14 19:28:03.671840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:26.498 [2024-02-14 19:28:03.671849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.498 [2024-02-14 19:28:03.671976] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:27.067 [2024-02-14 19:28:04.241081] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:27.067 [2024-02-14 19:28:04.241265] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:27.636 [2024-02-14 19:28:04.816165] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:27.636 [2024-02-14 19:28:04.816274] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:27.636 [2024-02-14 19:28:04.816295] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:27.636 [2024-02-14 19:28:04.816324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.636 [2024-02-14 19:28:04.816337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:27.636 [2024-02-14 19:28:04.816366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1144.461 ms 00:27:27.636 [2024-02-14 19:28:04.816377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.816453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.636 [2024-02-14 19:28:04.816468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:27.636 [2024-02-14 19:28:04.816478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:27.636 [2024-02-14 19:28:04.816488] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.827947] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:27.636 [2024-02-14 19:28:04.828082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.636 [2024-02-14 19:28:04.828098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:27.636 [2024-02-14 19:28:04.828110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.558 ms 00:27:27.636 [2024-02-14 19:28:04.828119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.828861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.636 [2024-02-14 19:28:04.828922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:27.636 [2024-02-14 19:28:04.828935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.668 ms 00:27:27.636 [2024-02-14 19:28:04.828945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.831325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.636 [2024-02-14 19:28:04.831350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:27.636 [2024-02-14 19:28:04.831377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.356 ms 00:27:27.636 [2024-02-14 19:28:04.831386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.857039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.636 [2024-02-14 19:28:04.857075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:27.636 [2024-02-14 19:28:04.857106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.626 ms 00:27:27.636 [2024-02-14 19:28:04.857115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.857218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.636 [2024-02-14 19:28:04.857237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:27.636 [2024-02-14 19:28:04.857248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:27.636 [2024-02-14 19:28:04.857257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.859125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.636 [2024-02-14 19:28:04.859157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:27.636 [2024-02-14 19:28:04.859186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.846 ms 00:27:27.636 [2024-02-14 19:28:04.859196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.859231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.636 [2024-02-14 19:28:04.859245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:27.636 [2024-02-14 19:28:04.859255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:27.636 [2024-02-14 19:28:04.859264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.859315] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:27.636 [2024-02-14 19:28:04.859330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:27.636 [2024-02-14 19:28:04.859344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:27.636 [2024-02-14 19:28:04.859353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:27.636 [2024-02-14 19:28:04.859362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.859413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.636 [2024-02-14 19:28:04.859427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:27.636 [2024-02-14 19:28:04.859437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:27.636 [2024-02-14 19:28:04.859447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.636 [2024-02-14 19:28:04.860662] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1396.887 ms, result 0 00:27:27.636 [2024-02-14 19:28:04.873438] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.636 [2024-02-14 19:28:04.889453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:27.636 [2024-02-14 19:28:04.897564] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:28.205 Validate MD5 checksum, iteration 1 00:27:28.205 19:28:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:28.205 19:28:05 -- common/autotest_common.sh@850 -- # return 0 00:27:28.205 19:28:05 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:28.205 19:28:05 -- ftl/common.sh@95 -- # return 0 00:27:28.205 19:28:05 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:28.205 19:28:05 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:28.205 19:28:05 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:28.205 19:28:05 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:28.205 19:28:05 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:28.205 19:28:05 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:28.205 19:28:05 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:28.205 19:28:05 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:28.205 19:28:05 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:28.205 19:28:05 -- ftl/common.sh@154 -- # return 0 00:27:28.205 19:28:05 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:28.205 [2024-02-14 19:28:05.599874] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
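Each recovery step above is reported by trace_step() as an Action / name / duration / status quadruple, and finish_msg() rolls them up into the 'FTL startup' total of 1396.887 ms. When reading a captured log, the slowest steps can be ranked with a one-liner like the following (purely a reading aid, not part of the test; build.log stands in for wherever the console output was saved):

  # largest per-step durations first; the matching step names are on the preceding 'name:' lines
  grep -o 'duration: [0-9.]* ms' build.log | sort -k2 -rn | head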
00:27:28.205 [2024-02-14 19:28:05.600305] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79773 ] 00:27:28.464 [2024-02-14 19:28:05.767138] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.724 [2024-02-14 19:28:05.915689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.724 [2024-02-14 19:28:05.915962] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:32.451  Copying: 504/1024 [MB] (504 MBps) Copying: 1006/1024 [MB] (502 MBps) Copying: 1024/1024 [MB] (average 502 MBps)[2024-02-14 19:28:09.570422] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:27:33.388 00:27:33.388 00:27:33.388 19:28:10 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:33.388 19:28:10 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:35.293 19:28:12 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:35.293 Validate MD5 checksum, iteration 2 00:27:35.293 19:28:12 -- ftl/upgrade_shutdown.sh@103 -- # sum=9fa72f33aab95632ebeaa1ec776462cc 00:27:35.293 19:28:12 -- ftl/upgrade_shutdown.sh@105 -- # [[ 9fa72f33aab95632ebeaa1ec776462cc != \9\f\a\7\2\f\3\3\a\a\b\9\5\6\3\2\e\b\e\a\a\1\e\c\7\7\6\4\6\2\c\c ]] 00:27:35.293 19:28:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:35.293 19:28:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:35.293 19:28:12 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:35.293 19:28:12 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:35.293 19:28:12 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:35.293 19:28:12 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:35.293 19:28:12 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:35.293 19:28:12 -- ftl/common.sh@154 -- # return 0 00:27:35.293 19:28:12 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:35.293 [2024-02-14 19:28:12.438381] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:27:35.293 [2024-02-14 19:28:12.438560] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79848 ] 00:27:35.294 [2024-02-14 19:28:12.609763] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.553 [2024-02-14 19:28:12.818609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.553 [2024-02-14 19:28:12.818705] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:38.306  Copying: 513/1024 [MB] (513 MBps) Copying: 1003/1024 [MB] (490 MBps) Copying: 1024/1024 [MB] (average 502 MBps)[2024-02-14 19:28:15.673249] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:27:39.240 00:27:39.240 00:27:39.240 19:28:16 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:39.240 19:28:16 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@103 -- # sum=181527e64e6b0c23f85dd854befa5a6d 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@105 -- # [[ 181527e64e6b0c23f85dd854befa5a6d != \1\8\1\5\2\7\e\6\4\e\6\b\0\c\2\3\f\8\5\d\d\8\5\4\b\e\f\a\5\a\6\d ]] 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:41.143 19:28:18 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:41.143 19:28:18 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:41.143 19:28:18 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:41.143 19:28:18 -- ftl/common.sh@130 -- # [[ -n 79733 ]] 00:27:41.143 19:28:18 -- ftl/common.sh@131 -- # killprocess 79733 00:27:41.143 19:28:18 -- common/autotest_common.sh@924 -- # '[' -z 79733 ']' 00:27:41.143 19:28:18 -- common/autotest_common.sh@928 -- # kill -0 79733 00:27:41.143 19:28:18 -- common/autotest_common.sh@929 -- # uname 00:27:41.143 19:28:18 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:27:41.143 19:28:18 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 79733 00:27:41.143 killing process with pid 79733 00:27:41.143 19:28:18 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:27:41.143 19:28:18 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:27:41.143 19:28:18 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 79733' 00:27:41.143 19:28:18 -- common/autotest_common.sh@943 -- # kill 79733 00:27:41.143 [2024-02-14 19:28:18.535027] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 
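The two sums above, 9fa72f33aab95632ebeaa1ec776462cc for the first gibibyte and 181527e64e6b0c23f85dd854befa5a6d for the second, match the values computed before the target was killed, which is the actual assertion of ftl_upgrade_shutdown: data read back through the recovered device is identical to what was there before the dirty shutdown. Pieced together from the xtrace of upgrade_shutdown.sh lines 96-105, the validation loop looks roughly like the sketch below; tcp_dd is the helper seen in the trace, while iterations, testdir and expected_sums are stand-in names for values this excerpt does not show:

  skip=0
  for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # pull the next 1 GiB slice of ftln1 over NVMe/TCP into a scratch file
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      ((skip += 1024))
      sum=$(md5sum "$testdir/file" | cut -d' ' -f1)
      # a mismatch against the sum recorded for the same slice fails the test
      [[ $sum == "${expected_sums[i]}" ]]
  done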
00:27:41.143 19:28:18 -- common/autotest_common.sh@948 -- # wait 79733 00:27:42.079 [2024-02-14 19:28:19.265311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:42.079 [2024-02-14 19:28:19.277867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.277908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:42.079 [2024-02-14 19:28:19.277950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:42.079 [2024-02-14 19:28:19.277962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.277992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:42.079 [2024-02-14 19:28:19.280681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.280708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:42.079 [2024-02-14 19:28:19.280721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.671 ms 00:27:42.079 [2024-02-14 19:28:19.280730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.280957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.280974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:42.079 [2024-02-14 19:28:19.280984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:27:42.079 [2024-02-14 19:28:19.280994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.282263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.282300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:42.079 [2024-02-14 19:28:19.282331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.250 ms 00:27:42.079 [2024-02-14 19:28:19.282341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.283524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.283604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:42.079 [2024-02-14 19:28:19.283620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.130 ms 00:27:42.079 [2024-02-14 19:28:19.283629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.293716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.293751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:42.079 [2024-02-14 19:28:19.293766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.050 ms 00:27:42.079 [2024-02-14 19:28:19.293776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.299430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.299471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:42.079 [2024-02-14 19:28:19.299515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.617 ms 00:27:42.079 [2024-02-14 19:28:19.299528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.299621] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.299639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:42.079 [2024-02-14 19:28:19.299650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:42.079 [2024-02-14 19:28:19.299659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.309600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.309633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:42.079 [2024-02-14 19:28:19.309662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.921 ms 00:27:42.079 [2024-02-14 19:28:19.309671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.319846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.319878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:42.079 [2024-02-14 19:28:19.319892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.140 ms 00:27:42.079 [2024-02-14 19:28:19.319900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.330743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.330775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:42.079 [2024-02-14 19:28:19.330788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.809 ms 00:27:42.079 [2024-02-14 19:28:19.330797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.340618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.340649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:42.079 [2024-02-14 19:28:19.340677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.762 ms 00:27:42.079 [2024-02-14 19:28:19.340686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.340720] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:42.079 [2024-02-14 19:28:19.340739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:42.079 [2024-02-14 19:28:19.340758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:42.079 [2024-02-14 19:28:19.340768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:42.079 [2024-02-14 19:28:19.340778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:42.079 [2024-02-14 19:28:19.340936] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:42.079 [2024-02-14 19:28:19.340946] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 764770a3-dba9-4cc1-86e9-4022bdfc4b9a 00:27:42.079 [2024-02-14 19:28:19.340956] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:42.079 [2024-02-14 19:28:19.340965] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:42.079 [2024-02-14 19:28:19.340974] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:42.079 [2024-02-14 19:28:19.340983] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:42.079 [2024-02-14 19:28:19.341006] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:42.079 [2024-02-14 19:28:19.341015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:42.079 [2024-02-14 19:28:19.341024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:42.079 [2024-02-14 19:28:19.341032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:42.079 [2024-02-14 19:28:19.341041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:42.079 [2024-02-14 19:28:19.341051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.341060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:42.079 [2024-02-14 19:28:19.341070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.332 ms 00:27:42.079 [2024-02-14 19:28:19.341084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.354210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.354259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:42.079 [2024-02-14 19:28:19.354273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.104 ms 00:27:42.079 [2024-02-14 19:28:19.354283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.354469] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:42.079 [2024-02-14 19:28:19.354524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:42.079 [2024-02-14 19:28:19.354553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:27:42.079 [2024-02-14 19:28:19.354563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.399368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.079 [2024-02-14 19:28:19.399408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:42.079 [2024-02-14 19:28:19.399423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.079 [2024-02-14 19:28:19.399432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.399463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.079 [2024-02-14 19:28:19.399481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:42.079 [2024-02-14 19:28:19.399530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.079 [2024-02-14 19:28:19.399540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.399628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.079 [2024-02-14 19:28:19.399655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:42.079 [2024-02-14 19:28:19.399674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.079 [2024-02-14 19:28:19.399690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.399756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.079 [2024-02-14 19:28:19.399795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:42.079 [2024-02-14 19:28:19.399814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.079 [2024-02-14 19:28:19.399844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.079 [2024-02-14 19:28:19.477971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.079 [2024-02-14 19:28:19.478026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:42.079 [2024-02-14 19:28:19.478060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.079 [2024-02-14 19:28:19.478070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.338 [2024-02-14 19:28:19.512208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.338 [2024-02-14 19:28:19.512245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:42.338 [2024-02-14 19:28:19.512267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.338 [2024-02-14 19:28:19.512277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.338 [2024-02-14 19:28:19.512350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.338 [2024-02-14 19:28:19.512366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:42.338 [2024-02-14 19:28:19.512376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.338 [2024-02-14 19:28:19.512385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.338 [2024-02-14 
19:28:19.512430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.338 [2024-02-14 19:28:19.512443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:42.338 [2024-02-14 19:28:19.512453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.338 [2024-02-14 19:28:19.512461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.338 [2024-02-14 19:28:19.512650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.338 [2024-02-14 19:28:19.512710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:42.338 [2024-02-14 19:28:19.512754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.338 [2024-02-14 19:28:19.512773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.338 [2024-02-14 19:28:19.512848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.338 [2024-02-14 19:28:19.512873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:42.338 [2024-02-14 19:28:19.512901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.338 [2024-02-14 19:28:19.512927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.338 [2024-02-14 19:28:19.512979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.338 [2024-02-14 19:28:19.513009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:42.338 [2024-02-14 19:28:19.513020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.338 [2024-02-14 19:28:19.513030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.338 [2024-02-14 19:28:19.513080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:42.338 [2024-02-14 19:28:19.513096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:42.338 [2024-02-14 19:28:19.513107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:42.338 [2024-02-14 19:28:19.513116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.338 [2024-02-14 19:28:19.513255] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 235.351 ms, result 0 00:27:43.277 19:28:20 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:43.277 19:28:20 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:43.277 19:28:20 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:43.277 19:28:20 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:43.277 19:28:20 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:43.277 19:28:20 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:43.277 Remove shared memory files 00:27:43.277 19:28:20 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:43.277 19:28:20 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:43.277 19:28:20 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:43.277 19:28:20 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:43.277 19:28:20 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79543 00:27:43.277 19:28:20 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:43.277 19:28:20 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:43.277 ************************************ 00:27:43.277 END TEST ftl_upgrade_shutdown 00:27:43.277 ************************************ 
00:27:43.277 00:27:43.277 real 1m22.726s 00:27:43.277 user 1m58.391s 00:27:43.277 sys 0m21.108s 00:27:43.277 19:28:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:43.277 19:28:20 -- common/autotest_common.sh@10 -- # set +x 00:27:43.277 19:28:20 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:43.277 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:43.277 19:28:20 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:43.277 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:43.277 19:28:20 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:43.277 19:28:20 -- ftl/ftl.sh@14 -- # killprocess 71983 00:27:43.277 19:28:20 -- common/autotest_common.sh@924 -- # '[' -z 71983 ']' 00:27:43.277 19:28:20 -- common/autotest_common.sh@928 -- # kill -0 71983 00:27:43.277 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (71983) - No such process 00:27:43.277 Process with pid 71983 is not found 00:27:43.277 19:28:20 -- common/autotest_common.sh@951 -- # echo 'Process with pid 71983 is not found' 00:27:43.277 19:28:20 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:27:43.277 19:28:20 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79963 00:27:43.277 19:28:20 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:43.277 19:28:20 -- ftl/ftl.sh@20 -- # waitforlisten 79963 00:27:43.277 19:28:20 -- common/autotest_common.sh@817 -- # '[' -z 79963 ']' 00:27:43.277 19:28:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.277 19:28:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:43.277 19:28:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.277 19:28:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:43.277 19:28:20 -- common/autotest_common.sh@10 -- # set +x 00:27:43.277 [2024-02-14 19:28:20.565610] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:27:43.277 [2024-02-14 19:28:20.565772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79963 ] 00:27:43.536 [2024-02-14 19:28:20.735515] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.536 [2024-02-14 19:28:20.876743] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:43.536 [2024-02-14 19:28:20.876965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.103 19:28:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:44.103 19:28:21 -- common/autotest_common.sh@850 -- # return 0 00:27:44.103 19:28:21 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:27:44.362 nvme0n1 00:27:44.362 19:28:21 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:44.362 19:28:21 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:44.362 19:28:21 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:44.620 19:28:21 -- ftl/common.sh@28 -- # stores=e278df3b-4076-40c7-8f78-972aceccf7a9 00:27:44.620 19:28:21 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:44.620 19:28:21 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e278df3b-4076-40c7-8f78-972aceccf7a9 00:27:44.879 19:28:22 -- ftl/ftl.sh@23 -- # killprocess 79963 00:27:44.879 19:28:22 -- common/autotest_common.sh@924 -- # '[' -z 79963 ']' 00:27:44.879 19:28:22 -- common/autotest_common.sh@928 -- # kill -0 79963 00:27:44.879 19:28:22 -- common/autotest_common.sh@929 -- # uname 00:27:44.879 19:28:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:27:44.879 19:28:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 79963 00:27:44.879 killing process with pid 79963 00:27:44.879 19:28:22 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:27:44.879 19:28:22 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:27:44.879 19:28:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 79963' 00:27:44.879 19:28:22 -- common/autotest_common.sh@943 -- # kill 79963 00:27:44.879 19:28:22 -- common/autotest_common.sh@948 -- # wait 79963 00:27:46.783 19:28:23 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:46.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:46.783 Waiting for block devices as requested 00:27:46.783 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.042 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.042 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.042 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:52.316 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:27:52.316 Remove shared memory files 00:27:52.316 19:28:29 -- ftl/ftl.sh@28 -- # remove_shm 00:27:52.316 19:28:29 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:52.316 19:28:29 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:52.316 19:28:29 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:52.316 19:28:29 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:52.316 19:28:29 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:52.316 19:28:29 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:52.316 
************************************ 00:27:52.316 END TEST ftl 00:27:52.316 ************************************ 00:27:52.316 00:27:52.316 real 11m45.718s 00:27:52.316 user 14m40.134s 00:27:52.316 sys 1m23.990s 00:27:52.316 19:28:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:52.316 19:28:29 -- common/autotest_common.sh@10 -- # set +x 00:27:52.316 19:28:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:52.316 19:28:29 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:52.316 19:28:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:52.316 19:28:29 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:27:52.316 19:28:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:52.316 19:28:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:52.316 19:28:29 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:52.316 19:28:29 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:27:52.316 19:28:29 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:27:52.316 19:28:29 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:27:52.316 19:28:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:52.316 19:28:29 -- common/autotest_common.sh@10 -- # set +x 00:27:52.316 19:28:29 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:27:52.316 19:28:29 -- common/autotest_common.sh@1369 -- # local autotest_es=0 00:27:52.316 19:28:29 -- common/autotest_common.sh@1370 -- # xtrace_disable 00:27:52.316 19:28:29 -- common/autotest_common.sh@10 -- # set +x 00:27:53.695 INFO: APP EXITING 00:27:53.695 INFO: killing all VMs 00:27:53.695 INFO: killing vhost app 00:27:53.695 INFO: EXIT DONE 00:27:54.264 lsblk: /dev/nvme0c0n1: not a block device 00:27:54.523 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:54.523 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:27:54.523 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:27:54.523 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:54.523 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:55.460 lsblk: /dev/nvme0c0n1: not a block device 00:27:55.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:55.460 Cleaning 00:27:55.460 Removing: /var/run/dpdk/spdk0/config 00:27:55.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:55.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:55.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:55.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:55.460 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:55.460 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:55.460 Removing: /var/run/dpdk/spdk0 00:27:55.460 Removing: /var/run/dpdk/spdk_pid56727 00:27:55.460 Removing: /var/run/dpdk/spdk_pid56936 00:27:55.460 Removing: /var/run/dpdk/spdk_pid57236 00:27:55.460 Removing: /var/run/dpdk/spdk_pid57334 00:27:55.460 Removing: /var/run/dpdk/spdk_pid57434 00:27:55.460 Removing: /var/run/dpdk/spdk_pid57549 00:27:55.460 Removing: /var/run/dpdk/spdk_pid57650 00:27:55.460 Removing: /var/run/dpdk/spdk_pid57696 00:27:55.460 Removing: /var/run/dpdk/spdk_pid57733 00:27:55.460 Removing: /var/run/dpdk/spdk_pid57794 00:27:55.460 Removing: /var/run/dpdk/spdk_pid57889 00:27:55.460 Removing: /var/run/dpdk/spdk_pid58333 00:27:55.460 Removing: /var/run/dpdk/spdk_pid58410 00:27:55.720 Removing: /var/run/dpdk/spdk_pid58488 00:27:55.720 Removing: /var/run/dpdk/spdk_pid58517 00:27:55.720 Removing: /var/run/dpdk/spdk_pid58651 
00:27:55.720 Removing: /var/run/dpdk/spdk_pid58674 00:27:55.720 Removing: /var/run/dpdk/spdk_pid58809 00:27:55.720 Removing: /var/run/dpdk/spdk_pid58838 00:27:55.720 Removing: /var/run/dpdk/spdk_pid58902 00:27:55.720 Removing: /var/run/dpdk/spdk_pid58922 00:27:55.720 Removing: /var/run/dpdk/spdk_pid58986 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59017 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59194 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59231 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59314 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59397 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59428 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59506 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59532 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59573 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59605 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59651 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59677 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59718 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59750 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59795 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59822 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59869 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59895 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59936 00:27:55.720 Removing: /var/run/dpdk/spdk_pid59962 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60014 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60040 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60081 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60107 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60158 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60184 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60230 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60262 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60303 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60329 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60381 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60407 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60448 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60474 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60521 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60547 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60593 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60625 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60666 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60695 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60750 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60779 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60829 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60860 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60907 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60933 00:27:55.720 Removing: /var/run/dpdk/spdk_pid60975 00:27:55.720 Removing: /var/run/dpdk/spdk_pid61056 00:27:55.720 Removing: /var/run/dpdk/spdk_pid61171 00:27:55.720 Removing: /var/run/dpdk/spdk_pid61348 00:27:55.720 Removing: /var/run/dpdk/spdk_pid61446 00:27:55.720 Removing: /var/run/dpdk/spdk_pid61494 00:27:55.720 Removing: /var/run/dpdk/spdk_pid61962 00:27:55.720 Removing: /var/run/dpdk/spdk_pid62151 00:27:55.720 Removing: /var/run/dpdk/spdk_pid62262 00:27:55.720 Removing: /var/run/dpdk/spdk_pid62315 00:27:55.720 Removing: /var/run/dpdk/spdk_pid62346 00:27:55.720 Removing: /var/run/dpdk/spdk_pid62421 00:27:55.720 Removing: /var/run/dpdk/spdk_pid63117 00:27:55.720 Removing: /var/run/dpdk/spdk_pid63161 00:27:55.720 Removing: /var/run/dpdk/spdk_pid63668 00:27:55.720 Removing: /var/run/dpdk/spdk_pid63777 00:27:55.720 Removing: /var/run/dpdk/spdk_pid63892 00:27:55.720 Removing: 
/var/run/dpdk/spdk_pid63946 00:27:55.720 Removing: /var/run/dpdk/spdk_pid63976 00:27:55.720 Removing: /var/run/dpdk/spdk_pid64003 00:27:55.720 Removing: /var/run/dpdk/spdk_pid65958 00:27:55.720 Removing: /var/run/dpdk/spdk_pid66108 00:27:55.720 Removing: /var/run/dpdk/spdk_pid66118 00:27:55.720 Removing: /var/run/dpdk/spdk_pid66135 00:27:55.720 Removing: /var/run/dpdk/spdk_pid66170 00:27:55.720 Removing: /var/run/dpdk/spdk_pid66178 00:27:55.720 Removing: /var/run/dpdk/spdk_pid66191 00:27:55.720 Removing: /var/run/dpdk/spdk_pid66236 00:27:55.720 Removing: /var/run/dpdk/spdk_pid66240 00:27:55.720 Removing: /var/run/dpdk/spdk_pid66263 00:27:55.980 Removing: /var/run/dpdk/spdk_pid66302 00:27:55.980 Removing: /var/run/dpdk/spdk_pid66306 00:27:55.980 Removing: /var/run/dpdk/spdk_pid66324 00:27:55.980 Removing: /var/run/dpdk/spdk_pid67791 00:27:55.980 Removing: /var/run/dpdk/spdk_pid67898 00:27:55.980 Removing: /var/run/dpdk/spdk_pid68037 00:27:55.980 Removing: /var/run/dpdk/spdk_pid68161 00:27:55.980 Removing: /var/run/dpdk/spdk_pid68291 00:27:55.980 Removing: /var/run/dpdk/spdk_pid68406 00:27:55.980 Removing: /var/run/dpdk/spdk_pid68545 00:27:55.980 Removing: /var/run/dpdk/spdk_pid68619 00:27:55.980 Removing: /var/run/dpdk/spdk_pid68765 00:27:55.980 Removing: /var/run/dpdk/spdk_pid69156 00:27:55.980 Removing: /var/run/dpdk/spdk_pid69198 00:27:55.980 Removing: /var/run/dpdk/spdk_pid69657 00:27:55.980 Removing: /var/run/dpdk/spdk_pid69844 00:27:55.980 Removing: /var/run/dpdk/spdk_pid69945 00:27:55.980 Removing: /var/run/dpdk/spdk_pid70055 00:27:55.980 Removing: /var/run/dpdk/spdk_pid70114 00:27:55.980 Removing: /var/run/dpdk/spdk_pid70144 00:27:55.980 Removing: /var/run/dpdk/spdk_pid70498 00:27:55.980 Removing: /var/run/dpdk/spdk_pid70566 00:27:55.980 Removing: /var/run/dpdk/spdk_pid70647 00:27:55.980 Removing: /var/run/dpdk/spdk_pid71038 00:27:55.980 Removing: /var/run/dpdk/spdk_pid71192 00:27:55.980 Removing: /var/run/dpdk/spdk_pid71983 00:27:55.980 Removing: /var/run/dpdk/spdk_pid72117 00:27:55.980 Removing: /var/run/dpdk/spdk_pid72343 00:27:55.980 Removing: /var/run/dpdk/spdk_pid72440 00:27:55.980 Removing: /var/run/dpdk/spdk_pid72803 00:27:55.980 Removing: /var/run/dpdk/spdk_pid73061 00:27:55.980 Removing: /var/run/dpdk/spdk_pid73408 00:27:55.980 Removing: /var/run/dpdk/spdk_pid73626 00:27:55.980 Removing: /var/run/dpdk/spdk_pid73773 00:27:55.980 Removing: /var/run/dpdk/spdk_pid73839 00:27:55.980 Removing: /var/run/dpdk/spdk_pid73993 00:27:55.980 Removing: /var/run/dpdk/spdk_pid74024 00:27:55.980 Removing: /var/run/dpdk/spdk_pid74091 00:27:55.980 Removing: /var/run/dpdk/spdk_pid74302 00:27:55.980 Removing: /var/run/dpdk/spdk_pid74563 00:27:55.980 Removing: /var/run/dpdk/spdk_pid75011 00:27:55.980 Removing: /var/run/dpdk/spdk_pid75493 00:27:55.980 Removing: /var/run/dpdk/spdk_pid75959 00:27:55.980 Removing: /var/run/dpdk/spdk_pid76507 00:27:55.980 Removing: /var/run/dpdk/spdk_pid76647 00:27:55.980 Removing: /var/run/dpdk/spdk_pid76734 00:27:55.980 Removing: /var/run/dpdk/spdk_pid77442 00:27:55.980 Removing: /var/run/dpdk/spdk_pid77508 00:27:55.980 Removing: /var/run/dpdk/spdk_pid77996 00:27:55.980 Removing: /var/run/dpdk/spdk_pid78424 00:27:55.980 Removing: /var/run/dpdk/spdk_pid78978 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79102 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79144 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79216 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79280 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79344 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79543 
00:27:55.980 Removing: /var/run/dpdk/spdk_pid79589 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79656 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79733 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79773 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79848 00:27:55.980 Removing: /var/run/dpdk/spdk_pid79963 00:27:55.980 Clean 00:27:56.239 killing process with pid 48360 00:27:56.239 killing process with pid 48361 00:27:56.239 19:28:33 -- common/autotest_common.sh@1434 -- # return 0 00:27:56.239 19:28:33 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:27:56.239 19:28:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:56.239 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:27:56.239 19:28:33 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:27:56.239 19:28:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:56.239 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:27:56.239 19:28:33 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:56.239 19:28:33 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:56.239 19:28:33 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:56.239 19:28:33 -- spdk/autotest.sh@394 -- # hash lcov 00:27:56.239 19:28:33 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:56.239 19:28:33 -- spdk/autotest.sh@396 -- # hostname 00:27:56.239 19:28:33 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:56.498 geninfo: WARNING: invalid characters removed from testname! 
00:28:18.460 19:28:53 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:19.398 19:28:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:21.932 19:28:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:23.837 19:29:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:25.742 19:29:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:28.273 19:29:05 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:30.177 19:29:07 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:30.177 19:29:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:30.177 19:29:07 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:30.177 19:29:07 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.177 19:29:07 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.177 19:29:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.177 19:29:07 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.177 19:29:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.177 19:29:07 -- paths/export.sh@5 -- $ export PATH 00:28:30.177 19:29:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.177 19:29:07 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:30.177 19:29:07 -- common/autobuild_common.sh@435 -- $ date +%s 00:28:30.177 19:29:07 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707938947.XXXXXX 00:28:30.177 19:29:07 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707938947.XQVckw 00:28:30.177 19:29:07 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:28:30.177 19:29:07 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:28:30.177 19:29:07 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:30.177 19:29:07 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:30.177 19:29:07 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:30.177 19:29:07 -- common/autobuild_common.sh@451 -- $ get_config_params 00:28:30.177 19:29:07 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:28:30.177 19:29:07 -- common/autotest_common.sh@10 -- $ set +x 00:28:30.177 19:29:07 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:28:30.177 19:29:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:30.177 19:29:07 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:30.177 19:29:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:30.177 19:29:07 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:30.177 19:29:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:30.177 19:29:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:30.177 19:29:07 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:30.177 19:29:07 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:30.177 19:29:07 -- common/autotest_common.sh@725 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:30.436 19:29:07 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:30.436 + [[ -n 5166 ]] 00:28:30.436 + sudo kill 5166 00:28:30.446 [Pipeline] } 00:28:30.466 [Pipeline] // timeout 00:28:30.471 [Pipeline] } 00:28:30.490 [Pipeline] // stage 00:28:30.495 [Pipeline] } 00:28:30.512 [Pipeline] // catchError 00:28:30.521 [Pipeline] stage 00:28:30.523 [Pipeline] { (Stop VM) 00:28:30.537 [Pipeline] sh 00:28:30.817 + vagrant halt 00:28:33.352 ==> default: Halting domain... 00:28:39.934 [Pipeline] sh 00:28:40.211 + vagrant destroy -f 00:28:42.744 ==> default: Removing domain... 00:28:43.322 [Pipeline] sh 00:28:43.645 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:28:43.664 [Pipeline] } 00:28:43.681 [Pipeline] // stage 00:28:43.686 [Pipeline] } 00:28:43.703 [Pipeline] // dir 00:28:43.708 [Pipeline] } 00:28:43.725 [Pipeline] // wrap 00:28:43.731 [Pipeline] } 00:28:43.747 [Pipeline] // catchError 00:28:43.756 [Pipeline] stage 00:28:43.758 [Pipeline] { (Epilogue) 00:28:43.772 [Pipeline] sh 00:28:44.054 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:49.339 [Pipeline] catchError 00:28:49.341 [Pipeline] { 00:28:49.356 [Pipeline] sh 00:28:49.638 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:49.638 Artifacts sizes are good 00:28:49.649 [Pipeline] } 00:28:49.667 [Pipeline] // catchError 00:28:49.679 [Pipeline] archiveArtifacts 00:28:49.687 Archiving artifacts 00:28:49.825 [Pipeline] cleanWs 00:28:49.836 [WS-CLEANUP] Deleting project workspace... 00:28:49.836 [WS-CLEANUP] Deferred wipeout is used... 00:28:49.842 [WS-CLEANUP] done 00:28:49.844 [Pipeline] } 00:28:49.864 [Pipeline] // stage 00:28:49.869 [Pipeline] } 00:28:49.886 [Pipeline] // node 00:28:49.891 [Pipeline] End of Pipeline 00:28:49.939 Finished: SUCCESS