00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2005
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3266
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.126 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.127 The recommended git tool is: git
00:00:00.127 using credential 00000000-0000-0000-0000-000000000002
00:00:00.128 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.167 Fetching changes from the remote Git repository
00:00:00.169 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.207 Using shallow fetch with depth 1
00:00:00.207 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.207 > git --version # timeout=10
00:00:00.238 > git --version # 'git version 2.39.2'
00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.253 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.021 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.031 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.041 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:08.041 > git config core.sparsecheckout # timeout=10
00:00:08.052 > git read-tree -mu HEAD # timeout=10
00:00:08.067 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:08.082 Commit message: "inventory: add WCP3 to free inventory"
00:00:08.082 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
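Editor's sketch: the checkout above (shallow fetch of refs/heads/master, then a detached checkout of the fetched revision) can be reproduced by hand; this is a minimal reconstruction, with the proxy setup and the GIT_ASKPASS credential helper omitted, pinned to the revision shown above.

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # --depth=1 keeps only the tip commit, matching "Using shallow fetch with depth 1"
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d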
00:00:08.194 [Pipeline] Start of Pipeline
00:00:08.210 [Pipeline] library
00:00:08.212 Loading library shm_lib@master
00:00:08.212 Library shm_lib@master is cached. Copying from home.
00:00:08.227 [Pipeline] node
00:00:08.236 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest
00:00:08.238 [Pipeline] {
00:00:08.246 [Pipeline] catchError
00:00:08.247 [Pipeline] {
00:00:08.256 [Pipeline] wrap
00:00:08.263 [Pipeline] {
00:00:08.268 [Pipeline] stage
00:00:08.269 [Pipeline] { (Prologue)
00:00:08.283 [Pipeline] echo
00:00:08.284 Node: VM-host-SM9
00:00:08.288 [Pipeline] cleanWs
00:00:08.295 [WS-CLEANUP] Deleting project workspace...
00:00:08.296 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.302 [WS-CLEANUP] done
00:00:08.453 [Pipeline] setCustomBuildProperty
00:00:08.516 [Pipeline] httpRequest
00:00:08.529 [Pipeline] echo
00:00:08.531 Sorcerer 10.211.164.101 is alive
00:00:08.539 [Pipeline] httpRequest
00:00:08.543 HttpMethod: GET
00:00:08.544 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.544 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.555 Response Code: HTTP/1.1 200 OK
00:00:08.556 Success: Status code 200 is in the accepted range: 200,404
00:00:08.556 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:14.300 [Pipeline] sh
00:00:14.584 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:14.602 [Pipeline] httpRequest
00:00:14.625 [Pipeline] echo
00:00:14.627 Sorcerer 10.211.164.101 is alive
00:00:14.636 [Pipeline] httpRequest
00:00:14.640 HttpMethod: GET
00:00:14.641 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:14.642 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:14.657 Response Code: HTTP/1.1 200 OK
00:00:14.658 Success: Status code 200 is in the accepted range: 200,404
00:00:14.658 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:46.166 [Pipeline] sh
00:00:46.446 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:48.992 [Pipeline] sh
00:00:49.273 + git -C spdk log --oneline -n5
00:00:49.273 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:00:49.273 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:00:49.273 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:00:49.273 e03c164a1 nvme: add nvme_ctrlr_lock
00:00:49.273 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister
00:00:49.308 [Pipeline] writeFile
00:00:49.324 [Pipeline] sh
00:00:49.606 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:49.618 [Pipeline] sh
00:00:49.899 + cat autorun-spdk.conf
00:00:49.899 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:49.899 SPDK_TEST_NVME=1
00:00:49.899 SPDK_TEST_FTL=1
00:00:49.899 SPDK_TEST_ISAL=1
00:00:49.899 SPDK_RUN_ASAN=1
00:00:49.899 SPDK_RUN_UBSAN=1
00:00:49.899 SPDK_TEST_XNVME=1
00:00:49.899 SPDK_TEST_NVME_FDP=1
00:00:49.899 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:49.906 RUN_NIGHTLY=1
00:00:49.908 [Pipeline] }
00:00:49.924 [Pipeline] // stage
00:00:49.940 [Pipeline] stage
00:00:49.943 [Pipeline] { (Run VM)
00:00:49.957 [Pipeline] sh
00:00:50.237 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:50.237 + echo 'Start stage prepare_nvme.sh'
00:00:50.237 Start stage prepare_nvme.sh
00:00:50.237 + [[ -n 2 ]]
00:00:50.237 + disk_prefix=ex2
00:00:50.237 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:00:50.237 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:00:50.237 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:00:50.237 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:50.237 ++ SPDK_TEST_NVME=1
00:00:50.237 ++ SPDK_TEST_FTL=1
00:00:50.237 ++ SPDK_TEST_ISAL=1
00:00:50.237 ++ SPDK_RUN_ASAN=1
00:00:50.237 ++ SPDK_RUN_UBSAN=1
00:00:50.237 ++ SPDK_TEST_XNVME=1
00:00:50.237 ++ SPDK_TEST_NVME_FDP=1
00:00:50.237 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:50.237 ++ RUN_NIGHTLY=1
00:00:50.237 + cd /var/jenkins/workspace/nvme-vg-autotest
00:00:50.237 + nvme_files=()
00:00:50.237 + declare -A nvme_files
00:00:50.237 + backend_dir=/var/lib/libvirt/images/backends
00:00:50.237 + nvme_files['nvme.img']=5G
00:00:50.237 + nvme_files['nvme-cmb.img']=5G
00:00:50.237 + nvme_files['nvme-multi0.img']=4G
00:00:50.237 + nvme_files['nvme-multi1.img']=4G
00:00:50.237 + nvme_files['nvme-multi2.img']=4G
00:00:50.237 + nvme_files['nvme-openstack.img']=8G
00:00:50.237 + nvme_files['nvme-zns.img']=5G
00:00:50.237 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:50.237 + (( SPDK_TEST_FTL == 1 ))
00:00:50.237 + nvme_files["nvme-ftl.img"]=6G
00:00:50.237 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:50.237 + nvme_files["nvme-fdp.img"]=1G
00:00:50.237 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:50.237 + for nvme in "${!nvme_files[@]}"
00:00:50.237 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:50.237 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:50.237 + for nvme in "${!nvme_files[@]}"
00:00:50.237 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:00:50.496 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:50.496 + for nvme in "${!nvme_files[@]}"
00:00:50.496 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:50.496 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:50.496 + for nvme in "${!nvme_files[@]}"
00:00:50.496 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:50.496 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:50.496 + for nvme in "${!nvme_files[@]}"
00:00:50.496 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:50.496 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:50.496 + for nvme in "${!nvme_files[@]}"
00:00:50.496 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:50.755 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:50.755 + for nvme in "${!nvme_files[@]}"
00:00:50.755 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:51.013 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:51.013 + for nvme in "${!nvme_files[@]}"
00:00:51.013 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:00:51.013 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:51.013 + for nvme in "${!nvme_files[@]}"
00:00:51.013 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:51.013 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:51.013 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:51.013 + echo 'End stage prepare_nvme.sh'
00:00:51.013 End stage prepare_nvme.sh
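Editor's sketch: the xtrace above is consistent with prepare_nvme.sh building an associative array of image names to sizes and looping over it; the minimal bash reconstruction below is not the verbatim script. Note that bash does not order associative-array keys, which is why the images above are created in no particular order.

    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G [nvme-zns.img]=5G
    )
    # Feature-gated extras, per the SPDK_TEST_* checks in the trace:
    (( SPDK_TEST_FTL == 1 )) && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G

    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        # create_nvme_img.sh formats a raw, falloc-preallocated backing file
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/${disk_prefix}-${nvme}" -s "${nvme_files[$nvme]}"
    done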
00:00:51.024 [Pipeline] sh
00:00:51.303 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:51.303 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:00:51.562
00:00:51.562 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:00:51.562 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:00:51.562 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:00:51.562 HELP=0
00:00:51.562 DRY_RUN=0
00:00:51.562 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:00:51.562 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:51.562 NVME_AUTO_CREATE=0
00:00:51.562 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:00:51.562 NVME_CMB=,,,,
00:00:51.562 NVME_PMR=,,,,
00:00:51.562 NVME_ZNS=,,,,
00:00:51.562 NVME_MS=true,,,,
00:00:51.562 NVME_FDP=,,,on,
00:00:51.562 SPDK_VAGRANT_DISTRO=fedora38
00:00:51.562 SPDK_VAGRANT_VMCPU=10
00:00:51.562 SPDK_VAGRANT_VMRAM=12288
00:00:51.562 SPDK_VAGRANT_PROVIDER=libvirt
00:00:51.562 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:51.562 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:51.562 SPDK_OPENSTACK_NETWORK=0
00:00:51.562 VAGRANT_PACKAGE_BOX=0
00:00:51.562 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:51.562 FORCE_DISTRO=true
00:00:51.562 VAGRANT_BOX_VERSION=
00:00:51.562 EXTRA_VAGRANTFILES=
00:00:51.562 NIC_MODEL=e1000
00:00:51.562
00:00:51.562 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt'
00:00:51.562 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest
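Editor's note: lining the -b arguments up against the NVME_* variables they produce suggests a positional, comma-separated per-disk spec; the field names in the sketch below are editorial guesses inferred from this dump, not taken from vagrant_create_vm.sh itself.

    # -b <image>[,<type>[,<extra-namespace-imgs>[,<cmb>[,<pmr>[,<zns>[,<ms>[,<fdp>]]]]]]]
    -b .../ex2-nvme-ftl.img,nvme,,,,,true          # 7th field  -> NVME_MS=true,,,,
    -b .../ex2-nvme-multi0.img,nvme,m1.img:m2.img  # 3rd field  -> NVME_DISKS_NAMESPACES
    -b .../ex2-nvme-fdp.img,nvme,,,,,,on           # 8th field  -> NVME_FDP=,,,on,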
00:00:54.873 Bringing machine 'default' up with 'libvirt' provider...
00:00:54.873 ==> default: Creating image (snapshot of base box volume).
00:00:55.132 ==> default: Creating domain with the following settings...
00:00:55.132 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720903928_0035f45baf583441bb2e
00:00:55.132 ==> default: -- Domain type: kvm
00:00:55.132 ==> default: -- Cpus: 10
00:00:55.132 ==> default: -- Feature: acpi
00:00:55.132 ==> default: -- Feature: apic
00:00:55.132 ==> default: -- Feature: pae
00:00:55.132 ==> default: -- Memory: 12288M
00:00:55.132 ==> default: -- Memory Backing: hugepages:
00:00:55.132 ==> default: -- Management MAC:
00:00:55.132 ==> default: -- Loader:
00:00:55.132 ==> default: -- Nvram:
00:00:55.132 ==> default: -- Base box: spdk/fedora38
00:00:55.132 ==> default: -- Storage pool: default
00:00:55.132 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720903928_0035f45baf583441bb2e.img (20G)
00:00:55.132 ==> default: -- Volume Cache: default
00:00:55.132 ==> default: -- Kernel:
00:00:55.132 ==> default: -- Initrd:
00:00:55.132 ==> default: -- Graphics Type: vnc
00:00:55.132 ==> default: -- Graphics Port: -1
00:00:55.132 ==> default: -- Graphics IP: 127.0.0.1
00:00:55.132 ==> default: -- Graphics Password: Not defined
00:00:55.132 ==> default: -- Video Type: cirrus
00:00:55.132 ==> default: -- Video VRAM: 9216
00:00:55.132 ==> default: -- Sound Type:
00:00:55.132 ==> default: -- Keymap: en-us
00:00:55.132 ==> default: -- TPM Path:
00:00:55.132 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:55.132 ==> default: -- Command line args:
00:00:55.132 ==> default: -> value=-device,
00:00:55.132 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:00:55.132 ==> default: -> value=-drive,
00:00:55.132 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:55.132 ==> default: -> value=-device,
00:00:55.132 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:55.132 ==> default: -> value=-device,
00:00:55.132 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:00:55.132 ==> default: -> value=-drive,
00:00:55.132 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:00:55.132 ==> default: -> value=-device,
00:00:55.132 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:55.132 ==> default: -> value=-device,
00:00:55.132 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:00:55.132 ==> default: -> value=-drive,
00:00:55.132 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:55.132 ==> default: -> value=-device,
00:00:55.132 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:55.132 ==> default: -> value=-drive,
00:00:55.132 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:55.132 ==> default: -> value=-device,
00:00:55.132 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:55.132 ==> default: -> value=-drive,
00:00:55.132 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:55.132 ==> default: -> value=-device,
00:00:55.132 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:55.132 ==> default: -> value=-device,
00:00:55.132 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:55.132 ==> default: -> value=-device,
00:00:55.132 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3,
00:00:55.133 ==> default: -> value=-drive,
00:00:55.133 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:55.133 ==> default: -> value=-device,
00:00:55.133 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
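Editor's sketch: each -drive above is bound to an nvme-ns namespace device on its controller, and the FDP disk additionally hangs off a shared NVMe subsystem. Flattened to a direct QEMU command line, the FDP portion would look roughly as below; the harness actually passes these through libvirt, so this is illustrative only. As I understand QEMU's FDP emulation, fdp.runs, fdp.nrg and fdp.nruh set the reclaim unit size, the number of reclaim groups, and the number of reclaim unit handles.

    qemu-system-x86_64 ... \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096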
00:00:55.133 ==> default: Creating shared folders metadata...
00:00:55.133 ==> default: Starting domain.
00:00:56.540 ==> default: Waiting for domain to get an IP address...
00:01:14.629 ==> default: Waiting for SSH to become available...
00:01:16.005 ==> default: Configuring and enabling network interfaces...
00:01:20.196 default: SSH address: 192.168.121.67:22
00:01:20.196 default: SSH username: vagrant
00:01:20.196 default: SSH auth method: private key
00:01:22.124 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:30.237 ==> default: Mounting SSHFS shared folder...
00:01:31.612 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:31.612 ==> default: Checking Mount..
00:01:32.549 ==> default: Folder Successfully Mounted!
00:01:32.549 ==> default: Running provisioner: file...
00:01:33.485 default: ~/.gitconfig => .gitconfig
00:01:33.744
00:01:33.744 SUCCESS!
00:01:33.744
00:01:33.744 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:01:33.744 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:33.744 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:01:33.744
00:01:33.753 [Pipeline] }
00:01:33.771 [Pipeline] // stage
00:01:33.779 [Pipeline] dir
00:01:33.780 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt
00:01:33.781 [Pipeline] {
00:01:33.795 [Pipeline] catchError
00:01:33.797 [Pipeline] {
00:01:33.810 [Pipeline] sh
00:01:34.100 + vagrant ssh-config --host vagrant
00:01:34.101 + sed -ne /^Host/,$p
00:01:34.101 + tee ssh_conf
00:01:37.388 Host vagrant
00:01:37.388 HostName 192.168.121.67
00:01:37.388 User vagrant
00:01:37.388 Port 22
00:01:37.388 UserKnownHostsFile /dev/null
00:01:37.388 StrictHostKeyChecking no
00:01:37.388 PasswordAuthentication no
00:01:37.388 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:37.388 IdentitiesOnly yes
00:01:37.388 LogLevel FATAL
00:01:37.388 ForwardAgent yes
00:01:37.388 ForwardX11 yes
00:01:37.388
00:01:37.399 [Pipeline] withEnv
00:01:37.400 [Pipeline] {
00:01:37.409 [Pipeline] sh
00:01:37.686 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:37.686 source /etc/os-release
00:01:37.686 [[ -e /image.version ]] && img=$(< /image.version)
00:01:37.686 # Minimal, systemd-like check.
00:01:37.686 if [[ -e /.dockerenv ]]; then
00:01:37.686 # Clear garbage from the node's name:
00:01:37.686 # agt-er_autotest_547-896 -> autotest_547-896
00:01:37.686 # $HOSTNAME is the actual container id
00:01:37.686 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:37.686 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:37.686 # We can assume this is a mount from a host where container is running,
00:01:37.686 # so fetch its hostname to easily identify the target swarm worker.
00:01:37.686 container="$(< /etc/hostname) ($agent)"
00:01:37.686 else
00:01:37.686 # Fallback
00:01:37.686 container=$agent
00:01:37.686 fi
00:01:37.686 fi
00:01:37.686 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:37.686
00:01:37.957 [Pipeline] }
00:01:37.977 [Pipeline] // withEnv
00:01:37.985 [Pipeline] setCustomBuildProperty
00:01:38.000 [Pipeline] stage
00:01:38.002 [Pipeline] { (Tests)
00:01:38.019 [Pipeline] sh
00:01:38.303 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:38.574 [Pipeline] sh
00:01:38.853 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:39.125 [Pipeline] timeout
00:01:39.125 Timeout set to expire in 40 min
00:01:39.127 [Pipeline] {
00:01:39.141 [Pipeline] sh
00:01:39.420 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:39.986 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:01:39.999 [Pipeline] sh
00:01:40.277 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:40.551 [Pipeline] sh
00:01:40.886 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:40.931 [Pipeline] sh
00:01:41.204 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:41.462 ++ readlink -f spdk_repo
00:01:41.462 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:41.462 + [[ -n /home/vagrant/spdk_repo ]]
00:01:41.462 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:41.462 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:41.462 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:41.462 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:41.462 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:41.462 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:41.462 + cd /home/vagrant/spdk_repo
00:01:41.462 + source /etc/os-release
00:01:41.462 ++ NAME='Fedora Linux'
00:01:41.462 ++ VERSION='38 (Cloud Edition)'
00:01:41.462 ++ ID=fedora
00:01:41.462 ++ VERSION_ID=38
00:01:41.462 ++ VERSION_CODENAME=
00:01:41.462 ++ PLATFORM_ID=platform:f38
00:01:41.462 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:41.462 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:41.462 ++ LOGO=fedora-logo-icon
00:01:41.462 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:41.462 ++ HOME_URL=https://fedoraproject.org/
00:01:41.462 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:41.462 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:41.462 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:41.462 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:41.462 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:41.462 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:41.462 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:41.462 ++ SUPPORT_END=2024-05-14
00:01:41.462 ++ VARIANT='Cloud Edition'
00:01:41.462 ++ VARIANT_ID=cloud
00:01:41.462 + uname -a
00:01:41.463 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:41.463 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:41.463 Hugepages
00:01:41.463 node hugesize free / total
00:01:41.463 node0 1048576kB 0 / 0
00:01:41.463 node0 2048kB 0 / 0
00:01:41.463
00:01:41.463 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:41.463 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:41.721 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:41.721 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:41.721 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:41.721 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:41.721 + rm -f /tmp/spdk-ld-path
00:01:41.721 + source autorun-spdk.conf
00:01:41.721 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.721 ++ SPDK_TEST_NVME=1
00:01:41.721 ++ SPDK_TEST_FTL=1
00:01:41.721 ++ SPDK_TEST_ISAL=1
00:01:41.721 ++ SPDK_RUN_ASAN=1
00:01:41.721 ++ SPDK_RUN_UBSAN=1
00:01:41.721 ++ SPDK_TEST_XNVME=1
00:01:41.721 ++ SPDK_TEST_NVME_FDP=1
00:01:41.721 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:41.721 ++ RUN_NIGHTLY=1
00:01:41.721 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:41.721 + [[ -n '' ]]
00:01:41.721 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:41.721 + for M in /var/spdk/build-*-manifest.txt
00:01:41.721 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:41.721 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:41.721 + for M in /var/spdk/build-*-manifest.txt
00:01:41.721 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:41.721 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:41.721 ++ uname
00:01:41.721 + [[ Linux == \L\i\n\u\x ]]
00:01:41.721 + sudo dmesg -T
00:01:41.721 + sudo dmesg --clear
00:01:41.721 + dmesg_pid=5157
00:01:41.721 + [[ Fedora Linux == FreeBSD ]]
00:01:41.721 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:41.721 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:41.721 + sudo dmesg -Tw
00:01:41.721 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:41.721 + [[ -x /usr/src/fio-static/fio ]]
00:01:41.721 + export FIO_BIN=/usr/src/fio-static/fio
00:01:41.721 + FIO_BIN=/usr/src/fio-static/fio
00:01:41.721 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:41.721 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:41.721 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:41.721 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:41.721 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:41.721 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:41.721 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:41.721 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:41.721 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:41.721 Test configuration:
00:01:41.721 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.721 SPDK_TEST_NVME=1
00:01:41.721 SPDK_TEST_FTL=1
00:01:41.721 SPDK_TEST_ISAL=1
00:01:41.721 SPDK_RUN_ASAN=1
00:01:41.721 SPDK_RUN_UBSAN=1
00:01:41.721 SPDK_TEST_XNVME=1
00:01:41.721 SPDK_TEST_NVME_FDP=1
00:01:41.721 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:41.980 RUN_NIGHTLY=1
20:52:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:41.980 20:52:55 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:41.980 20:52:55 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:41.980 20:52:55 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:41.980 20:52:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.980 20:52:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.981 20:52:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.981 20:52:55 -- paths/export.sh@5 -- $ export PATH
00:01:41.981 20:52:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.981 20:52:55 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:41.981 20:52:55 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:41.981 20:52:55 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720903975.XXXXXX
00:01:41.981 20:52:55 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720903975.MeAydK
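Editor's note: the paths/export.sh trace above prepends to PATH unconditionally, so /usr/local/bin, the pip bin directory and friends end up listed more than once. Duplicates are harmless but noisy; a common guard-before-prepend idiom (an editorial suggestion, not what export.sh actually does) looks like this:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;           # already present, skip
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH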
00:01:41.981 20:52:55 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:41.981 20:52:55 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:41.981 20:52:55 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:41.981 20:52:55 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:41.981 20:52:55 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:41.981 20:52:55 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:41.981 20:52:55 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:41.981 20:52:55 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.981 20:52:55 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:41.981 20:52:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:41.981 20:52:55 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:41.981 20:52:55 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:41.981 20:52:55 -- spdk/autobuild.sh@16 -- $ date -u
00:01:41.981 Sat Jul 13 08:52:55 PM UTC 2024
00:01:41.981 20:52:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:41.981 LTS-59-g4b94202c6
00:01:41.981 20:52:55 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:41.981 20:52:55 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:41.981 20:52:55 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:41.981 20:52:55 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:41.981 20:52:55 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.981 ************************************
00:01:41.981 START TEST asan
00:01:41.981 ************************************
00:01:41.981 using asan
00:01:41.981 20:52:55 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:01:41.981
00:01:41.981 real 0m0.000s
00:01:41.981 user 0m0.000s
00:01:41.981 sys 0m0.000s
00:01:41.981 20:52:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:41.981 20:52:55 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.981 ************************************
00:01:41.981 END TEST asan
00:01:41.981 ************************************
00:01:41.981 20:52:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:41.981 20:52:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:41.981 20:52:55 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:41.981 20:52:55 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:41.981 20:52:55 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.981 ************************************
00:01:41.981 START TEST ubsan
00:01:41.981 ************************************
00:01:41.981 using ubsan
00:01:41.981 20:52:55 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:41.981
00:01:41.981 real 0m0.000s
00:01:41.981 user 0m0.000s
00:01:41.981 sys 0m0.000s
00:01:41.981 20:52:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:41.981 20:52:55 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.981 ************************************
00:01:41.981 END TEST ubsan
00:01:41.981 ************************************
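Editor's sketch: the START/END TEST banners and the zeroed time(1) output above come from SPDK's run_test helper in common/autotest_common.sh; the minimal reconstruction below only shows the shape of it and is not the actual implementation.

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                        # produces the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test asan echo 'using asan'      # as invoked by autobuild.sh above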
00:01:41.981 20:52:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:41.981 20:52:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:41.981 20:52:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:41.981 20:52:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:41.981 20:52:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:41.981 20:52:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:41.981 20:52:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:41.981 20:52:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:41.981 20:52:55 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:42.239 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:42.239 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:42.498 Using 'verbs' RDMA provider
00:01:58.319 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:10.518 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:10.518 Creating mk/config.mk...done.
00:02:10.518 Creating mk/cc.flags.mk...done.
00:02:10.518 Type 'make' to build.
00:02:10.518 20:53:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:10.518 20:53:23 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:02:10.518 20:53:23 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:10.518 20:53:23 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.518 ************************************
00:02:10.518 START TEST make
00:02:10.518 ************************************
00:02:10.518 20:53:23 -- common/autotest_common.sh@1104 -- $ make -j10
00:02:10.518 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:10.518 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:10.518 meson setup builddir \
00:02:10.518 -Dwith-libaio=enabled \
00:02:10.518 -Dwith-liburing=enabled \
00:02:10.518 -Dwith-libvfn=disabled \
00:02:10.518 -Dwith-spdk=false && \
00:02:10.518 meson compile -C builddir && \
00:02:10.518 cd -)
00:02:10.518 make[1]: Nothing to be done for 'all'.
00:02:12.422 The Meson build system
00:02:12.422 Version: 1.3.1
00:02:12.422 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:12.422 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:12.422 Build type: native build
00:02:12.422 Project name: xnvme
00:02:12.422 Project version: 0.7.3
00:02:12.422 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:12.422 C linker for the host machine: cc ld.bfd 2.39-16
00:02:12.422 Host machine cpu family: x86_64
00:02:12.423 Host machine cpu: x86_64
00:02:12.423 Message: host_machine.system: linux
00:02:12.423 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:12.423 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:12.423 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:12.423 Run-time dependency threads found: YES
00:02:12.423 Has header "setupapi.h" : NO
00:02:12.423 Has header "linux/blkzoned.h" : YES
00:02:12.423 Has header "linux/blkzoned.h" : YES (cached)
00:02:12.423 Has header "libaio.h" : YES
00:02:12.423 Library aio found: YES
00:02:12.423 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:12.423 Run-time dependency liburing found: YES 2.2
00:02:12.423 Dependency libvfn skipped: feature with-libvfn disabled
00:02:12.423 Run-time dependency appleframeworks found: NO (tried framework)
00:02:12.423 Run-time dependency appleframeworks found: NO (tried framework)
00:02:12.423 Configuring xnvme_config.h using configuration
00:02:12.423 Configuring xnvme.spec using configuration
00:02:12.423 Run-time dependency bash-completion found: YES 2.11
00:02:12.423 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:12.423 Program cp found: YES (/usr/bin/cp)
00:02:12.423 Has header "winsock2.h" : NO
00:02:12.423 Has header "dbghelp.h" : NO
00:02:12.423 Library rpcrt4 found: NO
00:02:12.423 Library rt found: YES
00:02:12.423 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:12.423 Found CMake: /usr/bin/cmake (3.27.7)
00:02:12.423 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:12.423 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:12.423 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:12.423 Build targets in project: 32
00:02:12.423
00:02:12.423 xnvme 0.7.3
00:02:12.423
00:02:12.423 User defined options
00:02:12.423 with-libaio : enabled
00:02:12.423 with-liburing: enabled
00:02:12.423 with-libvfn : disabled
00:02:12.423 with-spdk : false
00:02:12.423
00:02:12.423 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:12.990 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:12.990 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:12.990 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:12.990 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:12.990 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:12.990 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:12.990 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:12.990 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:12.990 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:12.990 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:12.990 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:12.990 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:12.990 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:12.990 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:13.249 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:13.249 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:13.249 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:13.249 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:13.249 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:13.249 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:13.249 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:13.249 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:13.249 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:13.249 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:13.249 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:13.249 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:13.249 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:13.249 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:13.249 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:13.249 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:13.249 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:13.249 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:13.249 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:13.249 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:13.249 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:13.249 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:13.249 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:13.509 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:13.509 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:13.509 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:13.509 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:13.509 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:13.509 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:13.509 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:13.509 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:13.509 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:13.509 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:13.509 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:13.509 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:13.509 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:13.509 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:13.509 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:13.509 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:13.509 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:13.509 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:13.509 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:13.509 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:13.509 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:13.509 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:13.509 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:13.509 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:13.509 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:13.768 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:13.768 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:13.768 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:13.768 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:13.768 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:13.768 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:13.768 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:13.768 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:13.768 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:13.768 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:13.768 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:13.768 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:13.768 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:13.768 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:13.768 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:13.768 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:14.027 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:14.027 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:14.027 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:14.027 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:14.027 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:14.027 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:14.027 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:14.027 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:14.027 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:14.027 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:14.027 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:14.027 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:14.027 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:14.027 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:14.027 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:14.027 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:14.287 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:14.287 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:14.287 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:14.287 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:14.287 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:14.287 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:14.287 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:14.287 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:14.287 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:14.287 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:14.287 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:14.287 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:14.287 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:14.287 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:14.287 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:14.287 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:14.287 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:14.287 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:14.287 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:14.287 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:14.287 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:14.287 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:14.287 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:14.287 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:14.287 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:14.287 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:14.287 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:14.287 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:14.287 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:14.546 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:14.546 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:14.546 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:14.546 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:14.546 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:14.546 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:14.546 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:14.546 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:14.546 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:14.546 [132/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:14.546 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:14.546 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:14.546 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:14.546 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:14.546 [137/203] Linking target lib/libxnvme.so
00:02:14.546 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:14.546 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:14.546 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:14.806 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:14.806 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:14.806 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:14.806 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:14.806 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:14.806 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:14.806 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:14.806 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:14.806 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:14.806 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:14.806 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:14.806 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:15.067 [153/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:15.067 [154/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:15.067 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:15.067 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:15.067 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:15.067 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:15.067 [159/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:15.067 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:15.067 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:15.067 [162/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:15.067 [163/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:15.067 [164/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:15.067 [165/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:15.067 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:15.067 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:15.326 [168/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:15.326 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:15.326 [170/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:15.326 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:15.326 [172/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:15.585 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:15.585 [174/203] Linking static target lib/libxnvme.a
00:02:15.585 [175/203] Linking target tests/xnvme_tests_cli
00:02:15.585 [176/203] Linking target tests/xnvme_tests_async_intf
00:02:15.585 [177/203] Linking target tests/xnvme_tests_xnvme_file
00:02:15.585 [178/203] Linking target tests/xnvme_tests_ioworker
00:02:15.585 [179/203] Linking target tests/xnvme_tests_lblk
00:02:15.585 [180/203] Linking target tests/xnvme_tests_scc
00:02:15.585 [181/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:15.585 [182/203] Linking target tests/xnvme_tests_enum
00:02:15.585 [183/203] Linking target tests/xnvme_tests_buf
00:02:15.585 [184/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:15.585 [185/203] Linking target tests/xnvme_tests_znd_append
00:02:15.585 [186/203] Linking target tests/xnvme_tests_map
00:02:15.585 [187/203] Linking target tests/xnvme_tests_znd_state
00:02:15.585 [188/203] Linking target tools/xdd
00:02:15.585 [189/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:15.585 [190/203] Linking target tools/lblk
00:02:15.585 [191/203] Linking target tools/xnvme
00:02:15.585 [192/203] Linking target tests/xnvme_tests_kvs
00:02:15.585 [193/203] Linking target tools/zoned
00:02:15.585 [194/203] Linking target examples/xnvme_enum
00:02:15.585 [195/203] Linking target tools/xnvme_file
00:02:15.585 [196/203] Linking target examples/xnvme_dev
00:02:15.585 [197/203] Linking target tools/kvs
00:02:15.585 [198/203] Linking target examples/xnvme_hello
00:02:15.585 [199/203] Linking target examples/xnvme_io_async
00:02:15.585 [200/203] Linking target examples/xnvme_single_async
00:02:15.844 [201/203] Linking target examples/zoned_io_sync
00:02:15.844 [202/203] Linking target examples/zoned_io_async
00:02:15.844 [203/203] Linking target examples/xnvme_single_sync
00:02:15.844 INFO: autodetecting backend as ninja
00:02:15.844 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:23.958 /home/vagrant/spdk_repo/spdk/xnvmebuild
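Editor's note: per the INFO lines above, meson compile is a thin front end that resolves to the ninja invocation shown; for this build tree the two commands below should be equivalent.

    meson compile -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
    /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir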
00:02:23.958 The Meson build system
00:02:23.958 Version: 1.3.1
00:02:23.958 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:23.958 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:23.958 Build type: native build
00:02:23.958 Program cat found: YES (/usr/bin/cat)
00:02:23.958 Project name: DPDK
00:02:23.958 Project version: 23.11.0
00:02:23.958 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:23.958 C linker for the host machine: cc ld.bfd 2.39-16
00:02:23.958 Host machine cpu family: x86_64
00:02:23.958 Host machine cpu: x86_64
00:02:23.958 Message: ## Building in Developer Mode ##
00:02:23.958 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:23.958 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:23.958 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:23.958 Program python3 found: YES (/usr/bin/python3)
00:02:23.958 Program cat found: YES (/usr/bin/cat)
00:02:23.958 Compiler for C supports arguments -march=native: YES
00:02:23.958 Checking for size of "void *" : 8
00:02:23.958 Checking for size of "void *" : 8 (cached)
00:02:23.958 Library m found: YES
00:02:23.958 Library numa found: YES
00:02:23.958 Has header "numaif.h" : YES
00:02:23.958 Library fdt found: NO
00:02:23.958 Library execinfo found: NO
00:02:23.958 Has header "execinfo.h" : YES
00:02:23.958 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:23.958 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:23.958 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:23.958 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:23.958 Run-time dependency openssl found: YES 3.0.9
00:02:23.958 Run-time dependency libpcap found: YES 1.10.4
00:02:23.958 Has header "pcap.h" with dependency libpcap: YES
00:02:23.958 Compiler for C supports arguments -Wcast-qual: YES
00:02:23.958 Compiler for C supports arguments -Wdeprecated: YES
00:02:23.958 Compiler for C supports arguments -Wformat: YES
00:02:23.958 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:23.958 Compiler for C supports arguments -Wformat-security: NO
00:02:23.958 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:23.958 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:23.958 Compiler for C supports arguments -Wnested-externs: YES
00:02:23.958 Compiler for C supports arguments -Wold-style-definition: YES
00:02:23.958 Compiler for C supports arguments -Wpointer-arith: YES
00:02:23.958 Compiler for C supports arguments -Wsign-compare: YES
00:02:23.958 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:23.958 Compiler for C supports arguments -Wundef: YES
00:02:23.958 Compiler for C supports arguments -Wwrite-strings: YES
00:02:23.958 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:23.958 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:23.958 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:23.958 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:23.958 Program objdump found: YES (/usr/bin/objdump)
00:02:23.958 Compiler for C supports arguments -mavx512f: YES
00:02:23.958 Checking if "AVX512 checking" compiles: YES
00:02:23.958 Fetching value of define "__SSE4_2__" : 1
00:02:23.958 Fetching value of define "__AES__" : 1
00:02:23.958 Fetching value of define "__AVX__" : 1
00:02:23.958 Fetching value of define "__AVX2__" : 1
00:02:23.958 Fetching value of define "__AVX512BW__" : (undefined)
00:02:23.958 Fetching value of define "__AVX512CD__" : (undefined)
00:02:23.958 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:23.958 Fetching value of define "__AVX512F__" : (undefined)
00:02:23.958 Fetching value of define "__AVX512VL__" : (undefined)
00:02:23.958 Fetching value of define "__PCLMUL__" : 1
00:02:23.958 Fetching value of define "__RDRND__" : 1
00:02:23.958 Fetching value of define "__RDSEED__" : 1
00:02:23.958 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:23.958 Fetching value of define "__znver1__" : (undefined)
00:02:23.958 Fetching value of define "__znver2__" : (undefined)
00:02:23.958 Fetching value of define "__znver3__" : (undefined)
00:02:23.958 Fetching value of define "__znver4__" : (undefined)
00:02:23.958 Library asan found: YES
00:02:23.958 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:23.958 Message: lib/log: Defining dependency "log"
00:02:23.958 Message: lib/kvargs: Defining dependency "kvargs"
00:02:23.958 Message: lib/telemetry: Defining dependency "telemetry"
00:02:23.958 Library rt found: YES
00:02:23.958 Checking for function "getentropy" : NO
00:02:23.958 Message: lib/eal: Defining dependency "eal"
00:02:23.958 Message: lib/ring: Defining dependency "ring"
00:02:23.958 Message: lib/rcu: Defining dependency "rcu"
00:02:23.958 Message: lib/mempool: Defining dependency "mempool"
00:02:23.958 Message: lib/mbuf: Defining dependency "mbuf"
00:02:23.958 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:23.958 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:23.958 Compiler for C supports arguments -mpclmul: YES
00:02:23.958 Compiler for C supports arguments -maes: YES
00:02:23.958 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:23.958 Compiler for C supports arguments -mavx512bw: YES
00:02:23.958 Compiler for C supports arguments -mavx512dq: YES
00:02:23.958 Compiler for C supports arguments -mavx512vl: YES
00:02:23.958 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:23.958 Compiler for C supports arguments -mavx2: YES
00:02:23.958 Compiler for C supports arguments -mavx: YES
00:02:23.958 Message: lib/net: Defining dependency "net"
00:02:23.958 Message: lib/meter: Defining dependency "meter"
00:02:23.958 Message: lib/ethdev: Defining dependency "ethdev"
00:02:23.958 Message: lib/pci: Defining dependency "pci"
00:02:23.958 Message: lib/cmdline: Defining dependency "cmdline"
00:02:23.958 Message: lib/hash: Defining dependency "hash"
00:02:23.959 Message: lib/timer: Defining dependency "timer"
00:02:23.959 Message: lib/compressdev: Defining dependency "compressdev"
00:02:23.959 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:23.959 Message: lib/dmadev: Defining dependency "dmadev"
00:02:23.959 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:23.959 Message: lib/power: Defining dependency "power"
00:02:23.959 Message: lib/reorder: Defining dependency "reorder"
00:02:23.959 Message: lib/security: Defining dependency "security"
00:02:23.959 Has header "linux/userfaultfd.h" : YES
00:02:23.959 Has header "linux/vduse.h" : YES
00:02:23.959 Message: lib/vhost: Defining dependency "vhost"
00:02:23.959 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:23.959 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:23.959 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:23.959 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:23.959 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:23.959 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:23.959 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:23.959 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:23.959 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:23.959 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:23.959 Program doxygen found: YES (/usr/bin/doxygen)
00:02:23.959 Configuring doxy-api-html.conf using configuration
00:02:23.959 Configuring doxy-api-man.conf using configuration
00:02:23.959 Program mandb found: YES (/usr/bin/mandb)
00:02:23.959 Program sphinx-build found: NO
00:02:23.959 Configuring rte_build_config.h using configuration
00:02:23.959 Message:
00:02:23.959 =================
00:02:23.959 Applications Enabled
00:02:23.959 =================
00:02:23.959
00:02:23.959 apps:
00:02:23.959
00:02:23.959
00:02:23.959 Message:
00:02:23.959 =================
00:02:23.959 Libraries Enabled
00:02:23.959 =================
00:02:23.959
00:02:23.959 libs:
00:02:23.959 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:23.959 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:23.959 cryptodev, dmadev, power, reorder, security, vhost,
00:02:23.959
00:02:23.959 Message:
00:02:23.959 ===============
00:02:23.959 Drivers Enabled
00:02:23.959 ===============
00:02:23.959
00:02:23.959 common:
00:02:23.959
00:02:23.959 bus:
00:02:23.959 pci, vdev,
00:02:23.959 mempool:
00:02:23.959 ring,
00:02:23.959 dma:
00:02:23.959
00:02:23.959 net:
00:02:23.959
00:02:23.959 crypto:
00:02:23.959
00:02:23.959 compress:
00:02:23.959
00:02:23.959 vdpa:
00:02:23.959
00:02:23.959
00:02:23.959 Message:
00:02:23.959 =================
00:02:23.959 Content Skipped
00:02:23.959 =================
00:02:23.959
00:02:23.959 apps:
00:02:23.959 dumpcap: explicitly disabled via build config
00:02:23.959 graph: explicitly disabled via build config
00:02:23.959 pdump: explicitly disabled via build config
00:02:23.959 proc-info: explicitly disabled via build config
00:02:23.959 test-acl: explicitly disabled via build config
00:02:23.959 test-bbdev: explicitly disabled via build config
00:02:23.959 test-cmdline: explicitly disabled via build config
00:02:23.959 test-compress-perf: explicitly disabled via build config
00:02:23.959 test-crypto-perf: explicitly disabled via build config
00:02:23.959 test-dma-perf: explicitly disabled via build config
00:02:23.959 test-eventdev: explicitly disabled via build config
00:02:23.959 test-fib: explicitly disabled via build config
00:02:23.959 test-flow-perf: explicitly disabled via build config
00:02:23.959 test-gpudev: explicitly disabled via build config
00:02:23.959 test-mldev: explicitly disabled via build config
00:02:23.959 test-pipeline: explicitly disabled via build config
00:02:23.959 test-pmd: explicitly disabled via build config
00:02:23.959 test-regex: explicitly disabled via build config
00:02:23.959 test-sad: explicitly disabled via build config
00:02:23.959 test-security-perf: explicitly disabled via build config
00:02:23.959
00:02:23.959 libs:
00:02:23.959 metrics: explicitly disabled via build config
00:02:23.959 acl: explicitly disabled via build config
00:02:23.959 bbdev: explicitly disabled via build config
00:02:23.959 bitratestats: explicitly disabled via build config
00:02:23.959 bpf: explicitly disabled via build config
00:02:23.959 cfgfile: explicitly disabled via build config
00:02:23.959 distributor: explicitly disabled via build config
00:02:23.959 efd: explicitly disabled via build config
00:02:23.959 eventdev: explicitly disabled via build config
00:02:23.959 dispatcher: explicitly disabled via build config
00:02:23.959 gpudev: explicitly disabled via build config
00:02:23.959 gro: explicitly disabled via build config
00:02:23.959 gso: explicitly disabled via build config
00:02:23.959 ip_frag: explicitly disabled via build config
00:02:23.959 jobstats: explicitly disabled via build config
00:02:23.959 latencystats: explicitly disabled via build config
00:02:23.959 lpm: explicitly disabled via build config
00:02:23.959 member: explicitly disabled via build config
00:02:23.959 pcapng: explicitly disabled via build config
00:02:23.959 rawdev: explicitly disabled via build config
00:02:23.959 regexdev: explicitly disabled via build config
00:02:23.959 mldev: explicitly disabled via build config
00:02:23.959 rib: explicitly disabled via build config
00:02:23.959 sched: explicitly disabled via build config
00:02:23.959 stack: explicitly disabled via build config
00:02:23.959 ipsec: explicitly disabled via build config
00:02:23.959 pdcp: explicitly disabled via build config
00:02:23.959 fib: explicitly disabled via build config
00:02:23.959 port: explicitly disabled via build config
00:02:23.959 pdump: explicitly disabled via build config
00:02:23.959 table: explicitly disabled via build config
00:02:23.959 pipeline: explicitly disabled via build config
00:02:23.959 graph: explicitly disabled via build config
00:02:23.959 node: explicitly disabled via build config
00:02:23.959
00:02:23.959 drivers:
00:02:23.959 common/cpt: not in enabled drivers build config
00:02:23.959 common/dpaax: not in enabled drivers build config
00:02:23.959 common/iavf: not in enabled drivers build config
00:02:23.959 common/idpf: not in enabled drivers build config
00:02:23.959 common/mvep: not in enabled drivers build config
00:02:23.959 common/octeontx: not in enabled drivers build config
00:02:23.959 bus/auxiliary: not in enabled drivers build config
00:02:23.959 bus/cdx: not in enabled drivers build config
00:02:23.959 bus/dpaa: not in enabled drivers build config
00:02:23.959 bus/fslmc: not in enabled drivers build config
00:02:23.959 bus/ifpga: not in enabled drivers build config
00:02:23.959 bus/platform: not in enabled drivers build config
00:02:23.959 bus/vmbus: not in enabled drivers build config
00:02:23.959 common/cnxk: not in enabled drivers build config
00:02:23.959 common/mlx5: not in enabled drivers build config 00:02:23.959 common/nfp: not in enabled drivers build config 00:02:23.959 common/qat: not in enabled drivers build config 00:02:23.959 common/sfc_efx: not in enabled drivers build config 00:02:23.959 mempool/bucket: not in enabled drivers build config 00:02:23.959 mempool/cnxk: not in enabled drivers build config 00:02:23.959 mempool/dpaa: not in enabled drivers build config 00:02:23.959 mempool/dpaa2: not in enabled drivers build config 00:02:23.959 mempool/octeontx: not in enabled drivers build config 00:02:23.959 mempool/stack: not in enabled drivers build config 00:02:23.959 dma/cnxk: not in enabled drivers build config 00:02:23.959 dma/dpaa: not in enabled drivers build config 00:02:23.959 dma/dpaa2: not in enabled drivers build config 00:02:23.959 dma/hisilicon: not in enabled drivers build config 00:02:23.959 dma/idxd: not in enabled drivers build config 00:02:23.959 dma/ioat: not in enabled drivers build config 00:02:23.959 dma/skeleton: not in enabled drivers build config 00:02:23.959 net/af_packet: not in enabled drivers build config 00:02:23.959 net/af_xdp: not in enabled drivers build config 00:02:23.959 net/ark: not in enabled drivers build config 00:02:23.959 net/atlantic: not in enabled drivers build config 00:02:23.959 net/avp: not in enabled drivers build config 00:02:23.959 net/axgbe: not in enabled drivers build config 00:02:23.959 net/bnx2x: not in enabled drivers build config 00:02:23.959 net/bnxt: not in enabled drivers build config 00:02:23.959 net/bonding: not in enabled drivers build config 00:02:23.959 net/cnxk: not in enabled drivers build config 00:02:23.959 net/cpfl: not in enabled drivers build config 00:02:23.959 net/cxgbe: not in enabled drivers build config 00:02:23.959 net/dpaa: not in enabled drivers build config 00:02:23.959 net/dpaa2: not in enabled drivers build config 00:02:23.959 net/e1000: not in enabled drivers build config 00:02:23.959 net/ena: not in enabled drivers build config 00:02:23.959 net/enetc: not in enabled drivers build config 00:02:23.959 net/enetfec: not in enabled drivers build config 00:02:23.959 net/enic: not in enabled drivers build config 00:02:23.959 net/failsafe: not in enabled drivers build config 00:02:23.959 net/fm10k: not in enabled drivers build config 00:02:23.959 net/gve: not in enabled drivers build config 00:02:23.959 net/hinic: not in enabled drivers build config 00:02:23.959 net/hns3: not in enabled drivers build config 00:02:23.959 net/i40e: not in enabled drivers build config 00:02:23.959 net/iavf: not in enabled drivers build config 00:02:23.959 net/ice: not in enabled drivers build config 00:02:23.959 net/idpf: not in enabled drivers build config 00:02:23.959 net/igc: not in enabled drivers build config 00:02:23.959 net/ionic: not in enabled drivers build config 00:02:23.959 net/ipn3ke: not in enabled drivers build config 00:02:23.959 net/ixgbe: not in enabled drivers build config 00:02:23.959 net/mana: not in enabled drivers build config 00:02:23.959 net/memif: not in enabled drivers build config 00:02:23.959 net/mlx4: not in enabled drivers build config 00:02:23.959 net/mlx5: not in enabled drivers build config 00:02:23.959 net/mvneta: not in enabled drivers build config 00:02:23.959 net/mvpp2: not in enabled drivers build config 00:02:23.959 net/netvsc: not in enabled drivers build config 00:02:23.959 net/nfb: not in enabled drivers build config 00:02:23.959 net/nfp: not in enabled drivers build config 00:02:23.959 net/ngbe: not in enabled drivers 
build config 00:02:23.960 net/null: not in enabled drivers build config 00:02:23.960 net/octeontx: not in enabled drivers build config 00:02:23.960 net/octeon_ep: not in enabled drivers build config 00:02:23.960 net/pcap: not in enabled drivers build config 00:02:23.960 net/pfe: not in enabled drivers build config 00:02:23.960 net/qede: not in enabled drivers build config 00:02:23.960 net/ring: not in enabled drivers build config 00:02:23.960 net/sfc: not in enabled drivers build config 00:02:23.960 net/softnic: not in enabled drivers build config 00:02:23.960 net/tap: not in enabled drivers build config 00:02:23.960 net/thunderx: not in enabled drivers build config 00:02:23.960 net/txgbe: not in enabled drivers build config 00:02:23.960 net/vdev_netvsc: not in enabled drivers build config 00:02:23.960 net/vhost: not in enabled drivers build config 00:02:23.960 net/virtio: not in enabled drivers build config 00:02:23.960 net/vmxnet3: not in enabled drivers build config 00:02:23.960 raw/*: missing internal dependency, "rawdev" 00:02:23.960 crypto/armv8: not in enabled drivers build config 00:02:23.960 crypto/bcmfs: not in enabled drivers build config 00:02:23.960 crypto/caam_jr: not in enabled drivers build config 00:02:23.960 crypto/ccp: not in enabled drivers build config 00:02:23.960 crypto/cnxk: not in enabled drivers build config 00:02:23.960 crypto/dpaa_sec: not in enabled drivers build config 00:02:23.960 crypto/dpaa2_sec: not in enabled drivers build config 00:02:23.960 crypto/ipsec_mb: not in enabled drivers build config 00:02:23.960 crypto/mlx5: not in enabled drivers build config 00:02:23.960 crypto/mvsam: not in enabled drivers build config 00:02:23.960 crypto/nitrox: not in enabled drivers build config 00:02:23.960 crypto/null: not in enabled drivers build config 00:02:23.960 crypto/octeontx: not in enabled drivers build config 00:02:23.960 crypto/openssl: not in enabled drivers build config 00:02:23.960 crypto/scheduler: not in enabled drivers build config 00:02:23.960 crypto/uadk: not in enabled drivers build config 00:02:23.960 crypto/virtio: not in enabled drivers build config 00:02:23.960 compress/isal: not in enabled drivers build config 00:02:23.960 compress/mlx5: not in enabled drivers build config 00:02:23.960 compress/octeontx: not in enabled drivers build config 00:02:23.960 compress/zlib: not in enabled drivers build config 00:02:23.960 regex/*: missing internal dependency, "regexdev" 00:02:23.960 ml/*: missing internal dependency, "mldev" 00:02:23.960 vdpa/ifc: not in enabled drivers build config 00:02:23.960 vdpa/mlx5: not in enabled drivers build config 00:02:23.960 vdpa/nfp: not in enabled drivers build config 00:02:23.960 vdpa/sfc: not in enabled drivers build config 00:02:23.960 event/*: missing internal dependency, "eventdev" 00:02:23.960 baseband/*: missing internal dependency, "bbdev" 00:02:23.960 gpu/*: missing internal dependency, "gpudev" 00:02:23.960 00:02:23.960 00:02:23.960 Build targets in project: 85 00:02:23.960 00:02:23.960 DPDK 23.11.0 00:02:23.960 00:02:23.960 User defined options 00:02:23.960 buildtype : debug 00:02:23.960 default_library : shared 00:02:23.960 libdir : lib 00:02:23.960 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:23.960 b_sanitize : address 00:02:23.960 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:23.960 c_link_args : 00:02:23.960 cpu_instruction_set: native 00:02:23.960 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:23.960 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:23.960 enable_docs : false 00:02:23.960 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:23.960 enable_kmods : false 00:02:23.960 tests : false 00:02:23.960 00:02:23.960 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:24.218 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:24.218 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:24.218 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:24.218 [3/265] Linking static target lib/librte_kvargs.a 00:02:24.218 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:24.218 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:24.218 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:24.476 [7/265] Linking static target lib/librte_log.a 00:02:24.476 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:24.476 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:24.476 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:24.734 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.992 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:25.251 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:25.251 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:25.251 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:25.251 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.251 [17/265] Linking static target lib/librte_telemetry.a 00:02:25.251 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:25.251 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.510 [20/265] Linking target lib/librte_log.so.24.0 00:02:25.510 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:25.510 [22/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:25.510 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:25.768 [24/265] Linking target lib/librte_kvargs.so.24.0 00:02:25.768 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:25.768 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:25.768 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:26.026 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:26.026 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:26.026 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:26.026 [31/265] Generating lib/telemetry.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:26.026 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:26.284 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:26.284 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:26.284 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:26.541 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:26.541 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:26.541 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:26.541 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:26.541 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:26.800 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:26.800 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:26.800 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:26.800 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:26.800 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:27.058 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:27.316 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:27.316 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:27.316 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:27.316 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:27.574 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:27.574 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:27.574 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:27.832 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:27.832 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:27.832 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:27.832 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:27.832 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:28.091 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:28.091 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:28.091 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:28.349 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:28.349 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:28.349 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:28.349 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:28.615 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:28.615 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:28.911 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:28.911 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:28.911 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:28.911 [71/265] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.911 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:28.911 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:28.911 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:28.911 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:28.911 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:29.175 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:29.175 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:29.175 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:29.433 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:29.433 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:29.690 [82/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:29.690 [83/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:29.690 [84/265] Linking static target lib/librte_ring.a 00:02:29.690 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:29.690 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:29.948 [87/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:29.948 [88/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:29.948 [89/265] Linking static target lib/librte_rcu.a 00:02:29.948 [90/265] Linking static target lib/librte_eal.a 00:02:29.948 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:29.948 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.206 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:30.206 [94/265] Linking static target lib/librte_mempool.a 00:02:30.206 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.464 [96/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.464 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:30.464 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:30.723 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:30.723 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:30.723 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.981 [102/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:30.981 [103/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:31.239 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:31.239 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:31.239 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:31.239 [107/265] Linking static target lib/librte_mbuf.a 00:02:31.239 [108/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.497 [109/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:31.497 [110/265] Linking static target lib/librte_net.a 00:02:31.497 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:31.497 [112/265] Linking static target lib/librte_meter.a 00:02:31.756 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 
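
The "User defined options" summary above fully determines this DPDK sub-build. A rough reconstruction of the configure step, with the long lists abbreviated to "..." (the real invocation comes from SPDK's DPDK build wrapper and is not echoed in this log):

  $ meson setup build-tmp \
      --buildtype=debug --default-library=shared --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon ...' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps='dumpcap,graph,pdump,...' \
      -Ddisable_libs='acl,bbdev,bitratestats,...' \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_docs=false -Denable_kmods=false -Dtests=false
  $ ninja -C build-tmp

b_sanitize=address compiles all of DPDK with AddressSanitizer (consistent with "Library asan found: YES" earlier), and buildtype=debug is why the [1/265]...[265/265] ninja steps around this point produce unoptimized debug objects.
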
00:02:31.756 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:31.756 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:31.756 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.014 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.014 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:32.272 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:32.272 [120/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.530 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:32.530 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:32.530 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:32.789 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:32.789 [125/265] Linking static target lib/librte_pci.a 00:02:33.047 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:33.047 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:33.047 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:33.047 [129/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.047 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:33.306 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:33.306 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:33.306 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:33.306 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:33.306 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:33.306 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:33.564 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:33.564 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:33.564 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:33.564 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:33.564 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:33.564 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:33.822 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:33.822 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:33.822 [145/265] Linking static target lib/librte_cmdline.a 00:02:33.822 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:34.080 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:34.338 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:34.338 [149/265] Linking static target lib/librte_timer.a 00:02:34.338 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:34.338 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:34.596 [152/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:34.854 [153/265] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:34.854 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:34.854 [155/265] Linking static target lib/librte_compressdev.a 00:02:34.854 [156/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.854 [157/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:35.112 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:35.112 [159/265] Linking static target lib/librte_hash.a 00:02:35.112 [160/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:35.112 [161/265] Linking static target lib/librte_ethdev.a 00:02:35.112 [162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:35.370 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:35.370 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:35.370 [165/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.370 [166/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:35.371 [167/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:35.371 [168/265] Linking static target lib/librte_dmadev.a 00:02:35.629 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:35.629 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.629 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:35.886 [172/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.886 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:36.144 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:36.144 [175/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.144 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:36.144 [177/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.144 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:36.144 [179/265] Linking static target lib/librte_cryptodev.a 00:02:36.144 [180/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:36.144 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:36.711 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:36.711 [183/265] Linking static target lib/librte_power.a 00:02:36.711 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:36.711 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:36.711 [186/265] Linking static target lib/librte_reorder.a 00:02:36.711 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:36.969 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:36.969 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:36.969 [190/265] Linking static target lib/librte_security.a 00:02:37.227 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.227 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:37.485 [193/265] Generating lib/power.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:37.485 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.485 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:37.743 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:38.001 [197/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.001 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:38.001 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:38.001 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:38.001 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:38.001 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:38.259 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:38.518 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:38.518 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:38.518 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:38.518 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:38.518 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:38.776 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:38.776 [210/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.776 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.776 [212/265] Linking static target drivers/librte_bus_vdev.a 00:02:38.776 [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:38.776 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.776 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.776 [216/265] Linking static target drivers/librte_bus_pci.a 00:02:38.776 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:38.776 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:39.034 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.034 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:39.034 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.035 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.035 [223/265] Linking static target drivers/librte_mempool_ring.a 00:02:39.293 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.856 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.114 [226/265] Linking target lib/librte_eal.so.24.0 00:02:40.114 [227/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:40.114 [228/265] Linking target lib/librte_pci.so.24.0 00:02:40.114 [229/265] Linking target lib/librte_meter.so.24.0 00:02:40.114 [230/265] Linking target lib/librte_ring.so.24.0 00:02:40.114 [231/265] Linking target 
lib/librte_dmadev.so.24.0 00:02:40.114 [232/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:40.114 [233/265] Linking target lib/librte_timer.so.24.0 00:02:40.372 [234/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:40.372 [235/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:40.372 [236/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:40.372 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:40.372 [238/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:40.372 [239/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:40.372 [240/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:40.372 [241/265] Linking target lib/librte_rcu.so.24.0 00:02:40.372 [242/265] Linking target lib/librte_mempool.so.24.0 00:02:40.630 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:40.630 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:40.630 [245/265] Linking target lib/librte_mbuf.so.24.0 00:02:40.630 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:40.888 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:40.888 [248/265] Linking target lib/librte_compressdev.so.24.0 00:02:40.888 [249/265] Linking target lib/librte_reorder.so.24.0 00:02:40.888 [250/265] Linking target lib/librte_net.so.24.0 00:02:40.888 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:02:40.888 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:41.146 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:41.146 [254/265] Linking target lib/librte_hash.so.24.0 00:02:41.146 [255/265] Linking target lib/librte_cmdline.so.24.0 00:02:41.146 [256/265] Linking target lib/librte_security.so.24.0 00:02:41.146 [257/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:41.404 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.404 [259/265] Linking target lib/librte_ethdev.so.24.0 00:02:41.662 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:41.662 [261/265] Linking target lib/librte_power.so.24.0 00:02:44.194 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:44.452 [263/265] Linking static target lib/librte_vhost.a 00:02:46.419 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.419 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:46.419 INFO: autodetecting backend as ninja 00:02:46.419 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:47.353 CC lib/ut/ut.o 00:02:47.353 CC lib/log/log.o 00:02:47.353 CC lib/log/log_flags.o 00:02:47.353 CC lib/log/log_deprecated.o 00:02:47.353 CC lib/ut_mock/mock.o 00:02:47.353 LIB libspdk_log.a 00:02:47.353 LIB libspdk_ut_mock.a 00:02:47.353 LIB libspdk_ut.a 00:02:47.353 SO libspdk_log.so.6.1 00:02:47.353 SO libspdk_ut.so.1.0 00:02:47.353 SO libspdk_ut_mock.so.5.0 00:02:47.610 SYMLINK libspdk_ut.so 00:02:47.610 SYMLINK libspdk_ut_mock.so 00:02:47.610 SYMLINK libspdk_log.so 00:02:47.610 CC lib/ioat/ioat.o 00:02:47.610 CC lib/dma/dma.o 00:02:47.610 
CXX lib/trace_parser/trace.o 00:02:47.610 CC lib/util/base64.o 00:02:47.610 CC lib/util/bit_array.o 00:02:47.610 CC lib/util/cpuset.o 00:02:47.610 CC lib/util/crc32.o 00:02:47.610 CC lib/util/crc16.o 00:02:47.610 CC lib/util/crc32c.o 00:02:47.610 CC lib/vfio_user/host/vfio_user_pci.o 00:02:47.867 CC lib/util/crc32_ieee.o 00:02:47.867 CC lib/util/crc64.o 00:02:47.867 CC lib/util/dif.o 00:02:47.867 LIB libspdk_dma.a 00:02:47.867 CC lib/util/fd.o 00:02:47.867 SO libspdk_dma.so.3.0 00:02:47.867 CC lib/util/file.o 00:02:47.867 CC lib/util/hexlify.o 00:02:47.867 CC lib/util/iov.o 00:02:48.124 SYMLINK libspdk_dma.so 00:02:48.124 CC lib/util/math.o 00:02:48.124 LIB libspdk_ioat.a 00:02:48.124 CC lib/util/pipe.o 00:02:48.124 CC lib/vfio_user/host/vfio_user.o 00:02:48.124 CC lib/util/strerror_tls.o 00:02:48.124 SO libspdk_ioat.so.6.0 00:02:48.124 CC lib/util/string.o 00:02:48.124 CC lib/util/uuid.o 00:02:48.124 CC lib/util/fd_group.o 00:02:48.124 SYMLINK libspdk_ioat.so 00:02:48.124 CC lib/util/xor.o 00:02:48.124 CC lib/util/zipf.o 00:02:48.381 LIB libspdk_vfio_user.a 00:02:48.381 SO libspdk_vfio_user.so.4.0 00:02:48.381 SYMLINK libspdk_vfio_user.so 00:02:48.638 LIB libspdk_util.a 00:02:48.638 SO libspdk_util.so.8.0 00:02:48.895 SYMLINK libspdk_util.so 00:02:49.152 CC lib/conf/conf.o 00:02:49.152 CC lib/vmd/vmd.o 00:02:49.152 CC lib/vmd/led.o 00:02:49.152 CC lib/json/json_util.o 00:02:49.152 CC lib/json/json_parse.o 00:02:49.152 CC lib/rdma/common.o 00:02:49.152 CC lib/rdma/rdma_verbs.o 00:02:49.152 CC lib/env_dpdk/env.o 00:02:49.152 CC lib/idxd/idxd.o 00:02:49.152 LIB libspdk_trace_parser.a 00:02:49.152 SO libspdk_trace_parser.so.4.0 00:02:49.152 CC lib/idxd/idxd_user.o 00:02:49.409 SYMLINK libspdk_trace_parser.so 00:02:49.409 CC lib/idxd/idxd_kernel.o 00:02:49.409 LIB libspdk_conf.a 00:02:49.409 CC lib/json/json_write.o 00:02:49.409 CC lib/env_dpdk/memory.o 00:02:49.409 CC lib/env_dpdk/pci.o 00:02:49.409 SO libspdk_conf.so.5.0 00:02:49.409 LIB libspdk_rdma.a 00:02:49.409 SYMLINK libspdk_conf.so 00:02:49.409 SO libspdk_rdma.so.5.0 00:02:49.409 CC lib/env_dpdk/init.o 00:02:49.409 CC lib/env_dpdk/threads.o 00:02:49.409 CC lib/env_dpdk/pci_ioat.o 00:02:49.666 SYMLINK libspdk_rdma.so 00:02:49.666 CC lib/env_dpdk/pci_virtio.o 00:02:49.666 CC lib/env_dpdk/pci_vmd.o 00:02:49.666 CC lib/env_dpdk/pci_idxd.o 00:02:49.666 LIB libspdk_json.a 00:02:49.666 CC lib/env_dpdk/pci_event.o 00:02:49.666 CC lib/env_dpdk/sigbus_handler.o 00:02:49.666 SO libspdk_json.so.5.1 00:02:49.666 CC lib/env_dpdk/pci_dpdk.o 00:02:49.924 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:49.924 SYMLINK libspdk_json.so 00:02:49.924 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:49.924 LIB libspdk_idxd.a 00:02:49.924 LIB libspdk_vmd.a 00:02:49.924 SO libspdk_idxd.so.11.0 00:02:49.924 SO libspdk_vmd.so.5.0 00:02:49.924 CC lib/jsonrpc/jsonrpc_server.o 00:02:49.924 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:49.924 CC lib/jsonrpc/jsonrpc_client.o 00:02:49.924 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:49.924 SYMLINK libspdk_vmd.so 00:02:49.924 SYMLINK libspdk_idxd.so 00:02:50.182 LIB libspdk_jsonrpc.a 00:02:50.441 SO libspdk_jsonrpc.so.5.1 00:02:50.441 SYMLINK libspdk_jsonrpc.so 00:02:50.699 CC lib/rpc/rpc.o 00:02:50.699 LIB libspdk_rpc.a 00:02:50.957 SO libspdk_rpc.so.5.0 00:02:50.957 SYMLINK libspdk_rpc.so 00:02:50.957 LIB libspdk_env_dpdk.a 00:02:50.957 CC lib/sock/sock.o 00:02:50.957 CC lib/sock/sock_rpc.o 00:02:50.957 CC lib/notify/notify.o 00:02:50.957 CC lib/notify/notify_rpc.o 00:02:50.957 CC lib/trace/trace.o 00:02:50.957 CC lib/trace/trace_rpc.o 
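
The one-word prefixes in SPDK's build output here are its quiet-make labels: CC/CXX compile a single object, LIB archives a static library, SO links the versioned shared object, and SYMLINK creates the unversioned development link beside it. The SO/SYMLINK pair follows the standard ELF soname convention; roughly, as a generic sketch rather than SPDK's literal link line:

  $ cc -shared -Wl,-soname,libspdk_log.so.6.1 \
      -o libspdk_log.so.6.1 log.o log_flags.o log_deprecated.o
  $ ln -sf libspdk_log.so.6.1 libspdk_log.so

Build-time consumers link against the plain libspdk_log.so symlink, while the soname recorded in the resulting binary pins it to the 6.1 ABI at run time.
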
00:02:50.957 CC lib/trace/trace_flags.o 00:02:51.216 SO libspdk_env_dpdk.so.13.0 00:02:51.216 LIB libspdk_notify.a 00:02:51.216 SO libspdk_notify.so.5.0 00:02:51.216 SYMLINK libspdk_env_dpdk.so 00:02:51.216 SYMLINK libspdk_notify.so 00:02:51.216 LIB libspdk_trace.a 00:02:51.474 SO libspdk_trace.so.9.0 00:02:51.474 SYMLINK libspdk_trace.so 00:02:51.474 LIB libspdk_sock.a 00:02:51.474 SO libspdk_sock.so.8.0 00:02:51.733 SYMLINK libspdk_sock.so 00:02:51.733 CC lib/thread/thread.o 00:02:51.733 CC lib/thread/iobuf.o 00:02:51.733 CC lib/nvme/nvme_fabric.o 00:02:51.733 CC lib/nvme/nvme_ctrlr.o 00:02:51.733 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:51.733 CC lib/nvme/nvme_ns_cmd.o 00:02:51.733 CC lib/nvme/nvme_ns.o 00:02:51.733 CC lib/nvme/nvme_pcie_common.o 00:02:51.733 CC lib/nvme/nvme_pcie.o 00:02:51.733 CC lib/nvme/nvme_qpair.o 00:02:51.991 CC lib/nvme/nvme.o 00:02:52.556 CC lib/nvme/nvme_quirks.o 00:02:52.557 CC lib/nvme/nvme_transport.o 00:02:52.815 CC lib/nvme/nvme_discovery.o 00:02:52.815 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.815 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.815 CC lib/nvme/nvme_tcp.o 00:02:53.073 CC lib/nvme/nvme_opal.o 00:02:53.073 CC lib/nvme/nvme_io_msg.o 00:02:53.330 CC lib/nvme/nvme_poll_group.o 00:02:53.331 CC lib/nvme/nvme_zns.o 00:02:53.331 CC lib/nvme/nvme_cuse.o 00:02:53.331 CC lib/nvme/nvme_vfio_user.o 00:02:53.589 CC lib/nvme/nvme_rdma.o 00:02:53.589 LIB libspdk_thread.a 00:02:53.589 SO libspdk_thread.so.9.0 00:02:53.589 SYMLINK libspdk_thread.so 00:02:53.847 CC lib/accel/accel.o 00:02:53.847 CC lib/blob/blobstore.o 00:02:53.847 CC lib/init/json_config.o 00:02:53.847 CC lib/init/subsystem.o 00:02:53.847 CC lib/init/subsystem_rpc.o 00:02:54.105 CC lib/init/rpc.o 00:02:54.105 CC lib/blob/request.o 00:02:54.105 CC lib/blob/zeroes.o 00:02:54.105 CC lib/accel/accel_rpc.o 00:02:54.105 LIB libspdk_init.a 00:02:54.363 SO libspdk_init.so.4.0 00:02:54.363 CC lib/blob/blob_bs_dev.o 00:02:54.363 CC lib/accel/accel_sw.o 00:02:54.363 SYMLINK libspdk_init.so 00:02:54.363 CC lib/virtio/virtio.o 00:02:54.363 CC lib/virtio/virtio_vhost_user.o 00:02:54.363 CC lib/event/app.o 00:02:54.621 CC lib/event/reactor.o 00:02:54.621 CC lib/virtio/virtio_vfio_user.o 00:02:54.621 CC lib/event/log_rpc.o 00:02:54.879 CC lib/event/app_rpc.o 00:02:54.879 CC lib/virtio/virtio_pci.o 00:02:54.879 CC lib/event/scheduler_static.o 00:02:55.137 LIB libspdk_event.a 00:02:55.137 LIB libspdk_accel.a 00:02:55.137 SO libspdk_event.so.12.0 00:02:55.137 SO libspdk_accel.so.14.0 00:02:55.137 LIB libspdk_virtio.a 00:02:55.137 SYMLINK libspdk_event.so 00:02:55.137 LIB libspdk_nvme.a 00:02:55.137 SO libspdk_virtio.so.6.0 00:02:55.137 SYMLINK libspdk_accel.so 00:02:55.396 SYMLINK libspdk_virtio.so 00:02:55.396 CC lib/bdev/bdev.o 00:02:55.396 CC lib/bdev/bdev_rpc.o 00:02:55.396 SO libspdk_nvme.so.12.0 00:02:55.396 CC lib/bdev/bdev_zone.o 00:02:55.396 CC lib/bdev/part.o 00:02:55.396 CC lib/bdev/scsi_nvme.o 00:02:55.654 SYMLINK libspdk_nvme.so 00:02:58.189 LIB libspdk_blob.a 00:02:58.189 SO libspdk_blob.so.10.1 00:02:58.189 SYMLINK libspdk_blob.so 00:02:58.448 CC lib/lvol/lvol.o 00:02:58.448 CC lib/blobfs/blobfs.o 00:02:58.448 CC lib/blobfs/tree.o 00:02:59.016 LIB libspdk_bdev.a 00:02:59.016 SO libspdk_bdev.so.14.0 00:02:59.274 SYMLINK libspdk_bdev.so 00:02:59.274 CC lib/scsi/dev.o 00:02:59.274 CC lib/nbd/nbd.o 00:02:59.274 CC lib/ublk/ublk.o 00:02:59.274 CC lib/scsi/lun.o 00:02:59.274 CC lib/nbd/nbd_rpc.o 00:02:59.274 CC lib/scsi/port.o 00:02:59.274 CC lib/nvmf/ctrlr.o 00:02:59.274 CC lib/ftl/ftl_core.o 00:02:59.542 LIB 
libspdk_blobfs.a 00:02:59.542 SO libspdk_blobfs.so.9.0 00:02:59.542 LIB libspdk_lvol.a 00:02:59.542 SYMLINK libspdk_blobfs.so 00:02:59.542 CC lib/scsi/scsi.o 00:02:59.542 CC lib/nvmf/ctrlr_discovery.o 00:02:59.542 SO libspdk_lvol.so.9.1 00:02:59.542 CC lib/ublk/ublk_rpc.o 00:02:59.542 SYMLINK libspdk_lvol.so 00:02:59.542 CC lib/nvmf/ctrlr_bdev.o 00:02:59.816 CC lib/scsi/scsi_bdev.o 00:02:59.816 CC lib/scsi/scsi_pr.o 00:02:59.816 CC lib/scsi/scsi_rpc.o 00:02:59.816 CC lib/ftl/ftl_init.o 00:02:59.816 LIB libspdk_nbd.a 00:02:59.816 CC lib/ftl/ftl_layout.o 00:02:59.816 SO libspdk_nbd.so.6.0 00:02:59.816 CC lib/ftl/ftl_debug.o 00:03:00.075 SYMLINK libspdk_nbd.so 00:03:00.075 CC lib/nvmf/subsystem.o 00:03:00.075 CC lib/nvmf/nvmf.o 00:03:00.075 CC lib/nvmf/nvmf_rpc.o 00:03:00.075 CC lib/nvmf/transport.o 00:03:00.075 LIB libspdk_ublk.a 00:03:00.075 CC lib/nvmf/tcp.o 00:03:00.075 SO libspdk_ublk.so.2.0 00:03:00.333 CC lib/ftl/ftl_io.o 00:03:00.333 SYMLINK libspdk_ublk.so 00:03:00.333 CC lib/nvmf/rdma.o 00:03:00.333 CC lib/scsi/task.o 00:03:00.333 CC lib/ftl/ftl_sb.o 00:03:00.592 LIB libspdk_scsi.a 00:03:00.592 CC lib/ftl/ftl_l2p.o 00:03:00.592 SO libspdk_scsi.so.8.0 00:03:00.592 CC lib/ftl/ftl_l2p_flat.o 00:03:00.592 SYMLINK libspdk_scsi.so 00:03:00.850 CC lib/ftl/ftl_nv_cache.o 00:03:00.850 CC lib/iscsi/conn.o 00:03:00.850 CC lib/iscsi/init_grp.o 00:03:00.850 CC lib/iscsi/iscsi.o 00:03:01.108 CC lib/ftl/ftl_band.o 00:03:01.108 CC lib/iscsi/md5.o 00:03:01.108 CC lib/iscsi/param.o 00:03:01.367 CC lib/iscsi/portal_grp.o 00:03:01.367 CC lib/iscsi/tgt_node.o 00:03:01.625 CC lib/ftl/ftl_band_ops.o 00:03:01.625 CC lib/iscsi/iscsi_subsystem.o 00:03:01.625 CC lib/iscsi/iscsi_rpc.o 00:03:01.625 CC lib/iscsi/task.o 00:03:01.883 CC lib/vhost/vhost.o 00:03:01.883 CC lib/vhost/vhost_rpc.o 00:03:01.883 CC lib/ftl/ftl_writer.o 00:03:01.883 CC lib/ftl/ftl_rq.o 00:03:01.883 CC lib/vhost/vhost_scsi.o 00:03:01.883 CC lib/vhost/vhost_blk.o 00:03:02.141 CC lib/vhost/rte_vhost_user.o 00:03:02.141 CC lib/ftl/ftl_reloc.o 00:03:02.141 CC lib/ftl/ftl_l2p_cache.o 00:03:02.141 CC lib/ftl/ftl_p2l.o 00:03:02.708 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.708 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.708 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.708 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.708 LIB libspdk_iscsi.a 00:03:02.708 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.708 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.708 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.708 SO libspdk_iscsi.so.7.0 00:03:02.967 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.967 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:02.967 SYMLINK libspdk_iscsi.so 00:03:02.967 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:02.967 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:02.967 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:02.967 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:02.967 LIB libspdk_nvmf.a 00:03:03.225 CC lib/ftl/utils/ftl_conf.o 00:03:03.225 CC lib/ftl/utils/ftl_md.o 00:03:03.225 CC lib/ftl/utils/ftl_mempool.o 00:03:03.225 SO libspdk_nvmf.so.17.0 00:03:03.225 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.225 CC lib/ftl/utils/ftl_property.o 00:03:03.225 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.225 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.225 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.225 LIB libspdk_vhost.a 00:03:03.225 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.483 SO libspdk_vhost.so.7.1 00:03:03.483 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.483 SYMLINK libspdk_nvmf.so 00:03:03.483 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.483 SYMLINK libspdk_vhost.so 00:03:03.483 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.483 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.483 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.483 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.483 CC lib/ftl/base/ftl_base_dev.o 00:03:03.483 CC lib/ftl/base/ftl_base_bdev.o 00:03:03.742 CC lib/ftl/ftl_trace.o 00:03:04.000 LIB libspdk_ftl.a 00:03:04.258 SO libspdk_ftl.so.8.0 00:03:04.516 SYMLINK libspdk_ftl.so 00:03:04.774 CC module/env_dpdk/env_dpdk_rpc.o 00:03:04.774 CC module/accel/error/accel_error.o 00:03:04.774 CC module/accel/ioat/accel_ioat.o 00:03:04.774 CC module/accel/dsa/accel_dsa.o 00:03:04.774 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:04.774 CC module/sock/posix/posix.o 00:03:04.774 CC module/scheduler/gscheduler/gscheduler.o 00:03:04.774 CC module/accel/iaa/accel_iaa.o 00:03:04.774 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:04.774 CC module/blob/bdev/blob_bdev.o 00:03:05.032 LIB libspdk_env_dpdk_rpc.a 00:03:05.032 SO libspdk_env_dpdk_rpc.so.5.0 00:03:05.032 LIB libspdk_scheduler_gscheduler.a 00:03:05.032 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.032 SYMLINK libspdk_env_dpdk_rpc.so 00:03:05.032 SO libspdk_scheduler_gscheduler.so.3.0 00:03:05.032 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.032 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:05.032 CC module/accel/error/accel_error_rpc.o 00:03:05.032 CC module/accel/ioat/accel_ioat_rpc.o 00:03:05.032 LIB libspdk_scheduler_dynamic.a 00:03:05.032 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.032 SYMLINK libspdk_scheduler_gscheduler.so 00:03:05.032 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:05.032 SO libspdk_scheduler_dynamic.so.3.0 00:03:05.032 LIB libspdk_blob_bdev.a 00:03:05.032 SYMLINK libspdk_scheduler_dynamic.so 00:03:05.291 LIB libspdk_accel_iaa.a 00:03:05.291 SO libspdk_blob_bdev.so.10.1 00:03:05.291 LIB libspdk_accel_error.a 00:03:05.291 LIB libspdk_accel_ioat.a 00:03:05.291 SO libspdk_accel_iaa.so.2.0 00:03:05.291 LIB libspdk_accel_dsa.a 00:03:05.291 SO libspdk_accel_ioat.so.5.0 00:03:05.291 SO libspdk_accel_error.so.1.0 00:03:05.291 SYMLINK libspdk_blob_bdev.so 00:03:05.291 SO libspdk_accel_dsa.so.4.0 00:03:05.291 SYMLINK libspdk_accel_iaa.so 00:03:05.291 SYMLINK libspdk_accel_error.so 00:03:05.291 SYMLINK libspdk_accel_ioat.so 00:03:05.291 SYMLINK libspdk_accel_dsa.so 00:03:05.550 CC module/bdev/lvol/vbdev_lvol.o 00:03:05.550 CC module/bdev/malloc/bdev_malloc.o 00:03:05.550 CC module/bdev/delay/vbdev_delay.o 00:03:05.550 CC module/bdev/null/bdev_null.o 00:03:05.550 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.550 CC module/bdev/error/vbdev_error.o 00:03:05.550 CC module/bdev/nvme/bdev_nvme.o 00:03:05.550 CC module/bdev/gpt/gpt.o 00:03:05.550 CC module/bdev/passthru/vbdev_passthru.o 00:03:05.808 CC module/bdev/gpt/vbdev_gpt.o 00:03:05.808 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:05.808 CC module/bdev/null/bdev_null_rpc.o 00:03:05.808 CC module/bdev/error/vbdev_error_rpc.o 00:03:05.808 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:05.808 LIB libspdk_blobfs_bdev.a 00:03:05.808 LIB libspdk_sock_posix.a 00:03:06.066 SO libspdk_blobfs_bdev.so.5.0 00:03:06.066 SO libspdk_sock_posix.so.5.0 00:03:06.066 LIB libspdk_bdev_error.a 00:03:06.066 LIB libspdk_bdev_null.a 00:03:06.066 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:06.067 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:06.067 LIB libspdk_bdev_passthru.a 00:03:06.067 SO libspdk_bdev_error.so.5.0 00:03:06.067 SO libspdk_bdev_null.so.5.0 00:03:06.067 SO libspdk_bdev_passthru.so.5.0 00:03:06.067 LIB libspdk_bdev_gpt.a 00:03:06.067 SYMLINK 
libspdk_blobfs_bdev.so 00:03:06.067 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:06.067 SYMLINK libspdk_sock_posix.so 00:03:06.067 SYMLINK libspdk_bdev_error.so 00:03:06.067 SO libspdk_bdev_gpt.so.5.0 00:03:06.067 SYMLINK libspdk_bdev_null.so 00:03:06.067 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:06.067 CC module/bdev/nvme/nvme_rpc.o 00:03:06.067 SYMLINK libspdk_bdev_passthru.so 00:03:06.067 CC module/bdev/nvme/bdev_mdns_client.o 00:03:06.067 SYMLINK libspdk_bdev_gpt.so 00:03:06.067 CC module/bdev/nvme/vbdev_opal.o 00:03:06.067 LIB libspdk_bdev_malloc.a 00:03:06.067 LIB libspdk_bdev_delay.a 00:03:06.067 CC module/bdev/raid/bdev_raid.o 00:03:06.067 SO libspdk_bdev_malloc.so.5.0 00:03:06.067 SO libspdk_bdev_delay.so.5.0 00:03:06.067 CC module/bdev/split/vbdev_split.o 00:03:06.324 SYMLINK libspdk_bdev_delay.so 00:03:06.324 SYMLINK libspdk_bdev_malloc.so 00:03:06.324 CC module/bdev/raid/bdev_raid_rpc.o 00:03:06.324 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.324 CC module/bdev/raid/bdev_raid_sb.o 00:03:06.324 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:06.324 LIB libspdk_bdev_lvol.a 00:03:06.324 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:06.324 LIB libspdk_bdev_split.a 00:03:06.324 SO libspdk_bdev_lvol.so.5.0 00:03:06.582 SYMLINK libspdk_bdev_lvol.so 00:03:06.582 SO libspdk_bdev_split.so.5.0 00:03:06.582 CC module/bdev/raid/raid0.o 00:03:06.582 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.582 CC module/bdev/raid/raid1.o 00:03:06.582 SYMLINK libspdk_bdev_split.so 00:03:06.582 CC module/bdev/aio/bdev_aio.o 00:03:06.582 CC module/bdev/xnvme/bdev_xnvme.o 00:03:06.840 CC module/bdev/ftl/bdev_ftl.o 00:03:06.840 CC module/bdev/iscsi/bdev_iscsi.o 00:03:06.840 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:06.840 CC module/bdev/aio/bdev_aio_rpc.o 00:03:06.840 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.097 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.097 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:07.097 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.097 LIB libspdk_bdev_aio.a 00:03:07.097 CC module/bdev/raid/concat.o 00:03:07.097 LIB libspdk_bdev_ftl.a 00:03:07.097 SO libspdk_bdev_aio.so.5.0 00:03:07.097 SO libspdk_bdev_ftl.so.5.0 00:03:07.097 LIB libspdk_bdev_zone_block.a 00:03:07.097 LIB libspdk_bdev_xnvme.a 00:03:07.097 SO libspdk_bdev_zone_block.so.5.0 00:03:07.097 SYMLINK libspdk_bdev_aio.so 00:03:07.097 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.097 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.097 SO libspdk_bdev_xnvme.so.2.0 00:03:07.097 SYMLINK libspdk_bdev_ftl.so 00:03:07.354 LIB libspdk_bdev_iscsi.a 00:03:07.354 SYMLINK libspdk_bdev_zone_block.so 00:03:07.354 SYMLINK libspdk_bdev_xnvme.so 00:03:07.354 SO libspdk_bdev_iscsi.so.5.0 00:03:07.354 SYMLINK libspdk_bdev_iscsi.so 00:03:07.354 LIB libspdk_bdev_raid.a 00:03:07.354 SO libspdk_bdev_raid.so.5.0 00:03:07.612 SYMLINK libspdk_bdev_raid.so 00:03:07.612 LIB libspdk_bdev_virtio.a 00:03:07.612 SO libspdk_bdev_virtio.so.5.0 00:03:07.871 SYMLINK libspdk_bdev_virtio.so 00:03:08.129 LIB libspdk_bdev_nvme.a 00:03:08.387 SO libspdk_bdev_nvme.so.6.0 00:03:08.387 SYMLINK libspdk_bdev_nvme.so 00:03:08.645 CC module/event/subsystems/scheduler/scheduler.o 00:03:08.645 CC module/event/subsystems/sock/sock.o 00:03:08.645 CC module/event/subsystems/iobuf/iobuf.o 00:03:08.645 CC module/event/subsystems/vmd/vmd.o 00:03:08.645 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:08.645 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:08.645 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:08.904 LIB libspdk_event_sock.a 
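
Further down, the long TEST_HEADER include/spdk/*.h inventory followed by CXX test/cpp_headers/*.o compiles looks like SPDK's public-header hygiene check: every installed header gets its own C++ translation unit, so a header that is not self-contained, or not valid C++, breaks the build at this point. Conceptually each of those objects amounts to (an illustrative reconstruction, not the job's literal commands):

  $ echo '#include <spdk/accel.h>' > accel.cpp
  $ c++ -I include -c accel.cpp -o test/cpp_headers/accel.o

Nothing from that directory is linked or executed; a clean compile of every public header is the entire test.
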
00:03:08.904 LIB libspdk_event_scheduler.a 00:03:08.904 SO libspdk_event_sock.so.4.0 00:03:08.904 LIB libspdk_event_vhost_blk.a 00:03:08.904 LIB libspdk_event_vmd.a 00:03:08.904 LIB libspdk_event_iobuf.a 00:03:08.904 SO libspdk_event_scheduler.so.3.0 00:03:08.904 SO libspdk_event_vhost_blk.so.2.0 00:03:08.904 SO libspdk_event_iobuf.so.2.0 00:03:08.904 SO libspdk_event_vmd.so.5.0 00:03:08.904 SYMLINK libspdk_event_sock.so 00:03:08.904 SYMLINK libspdk_event_vhost_blk.so 00:03:08.904 SYMLINK libspdk_event_scheduler.so 00:03:08.904 SYMLINK libspdk_event_iobuf.so 00:03:08.904 SYMLINK libspdk_event_vmd.so 00:03:09.162 CC module/event/subsystems/accel/accel.o 00:03:09.420 LIB libspdk_event_accel.a 00:03:09.420 SO libspdk_event_accel.so.5.0 00:03:09.420 SYMLINK libspdk_event_accel.so 00:03:09.678 CC module/event/subsystems/bdev/bdev.o 00:03:09.936 LIB libspdk_event_bdev.a 00:03:09.936 SO libspdk_event_bdev.so.5.0 00:03:09.936 SYMLINK libspdk_event_bdev.so 00:03:10.193 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:10.193 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:10.193 CC module/event/subsystems/ublk/ublk.o 00:03:10.193 CC module/event/subsystems/scsi/scsi.o 00:03:10.193 CC module/event/subsystems/nbd/nbd.o 00:03:10.193 LIB libspdk_event_nbd.a 00:03:10.193 LIB libspdk_event_ublk.a 00:03:10.193 LIB libspdk_event_scsi.a 00:03:10.193 SO libspdk_event_nbd.so.5.0 00:03:10.193 SO libspdk_event_ublk.so.2.0 00:03:10.193 SO libspdk_event_scsi.so.5.0 00:03:10.450 SYMLINK libspdk_event_nbd.so 00:03:10.450 SYMLINK libspdk_event_ublk.so 00:03:10.450 SYMLINK libspdk_event_scsi.so 00:03:10.450 LIB libspdk_event_nvmf.a 00:03:10.450 SO libspdk_event_nvmf.so.5.0 00:03:10.450 SYMLINK libspdk_event_nvmf.so 00:03:10.450 CC module/event/subsystems/iscsi/iscsi.o 00:03:10.450 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:10.708 LIB libspdk_event_vhost_scsi.a 00:03:10.708 LIB libspdk_event_iscsi.a 00:03:10.708 SO libspdk_event_vhost_scsi.so.2.0 00:03:10.708 SO libspdk_event_iscsi.so.5.0 00:03:10.708 SYMLINK libspdk_event_vhost_scsi.so 00:03:10.965 SYMLINK libspdk_event_iscsi.so 00:03:10.965 SO libspdk.so.5.0 00:03:10.965 SYMLINK libspdk.so 00:03:11.222 CXX app/trace/trace.o 00:03:11.222 TEST_HEADER include/spdk/accel.h 00:03:11.222 TEST_HEADER include/spdk/accel_module.h 00:03:11.222 TEST_HEADER include/spdk/assert.h 00:03:11.222 TEST_HEADER include/spdk/barrier.h 00:03:11.222 TEST_HEADER include/spdk/base64.h 00:03:11.222 TEST_HEADER include/spdk/bdev.h 00:03:11.222 TEST_HEADER include/spdk/bdev_module.h 00:03:11.222 TEST_HEADER include/spdk/bdev_zone.h 00:03:11.222 TEST_HEADER include/spdk/bit_array.h 00:03:11.222 TEST_HEADER include/spdk/bit_pool.h 00:03:11.222 TEST_HEADER include/spdk/blob_bdev.h 00:03:11.222 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:11.222 TEST_HEADER include/spdk/blobfs.h 00:03:11.222 TEST_HEADER include/spdk/blob.h 00:03:11.222 TEST_HEADER include/spdk/conf.h 00:03:11.222 TEST_HEADER include/spdk/config.h 00:03:11.222 TEST_HEADER include/spdk/cpuset.h 00:03:11.222 TEST_HEADER include/spdk/crc16.h 00:03:11.222 TEST_HEADER include/spdk/crc32.h 00:03:11.222 TEST_HEADER include/spdk/crc64.h 00:03:11.222 TEST_HEADER include/spdk/dif.h 00:03:11.222 TEST_HEADER include/spdk/dma.h 00:03:11.222 TEST_HEADER include/spdk/endian.h 00:03:11.222 TEST_HEADER include/spdk/env_dpdk.h 00:03:11.222 TEST_HEADER include/spdk/env.h 00:03:11.222 CC examples/accel/perf/accel_perf.o 00:03:11.222 TEST_HEADER include/spdk/event.h 00:03:11.222 TEST_HEADER include/spdk/fd_group.h 00:03:11.222 TEST_HEADER 
include/spdk/fd.h 00:03:11.222 TEST_HEADER include/spdk/file.h 00:03:11.222 TEST_HEADER include/spdk/ftl.h 00:03:11.222 TEST_HEADER include/spdk/gpt_spec.h 00:03:11.222 TEST_HEADER include/spdk/hexlify.h 00:03:11.222 TEST_HEADER include/spdk/histogram_data.h 00:03:11.222 TEST_HEADER include/spdk/idxd.h 00:03:11.222 TEST_HEADER include/spdk/idxd_spec.h 00:03:11.222 TEST_HEADER include/spdk/init.h 00:03:11.222 TEST_HEADER include/spdk/ioat.h 00:03:11.222 CC test/app/bdev_svc/bdev_svc.o 00:03:11.222 TEST_HEADER include/spdk/ioat_spec.h 00:03:11.222 CC test/accel/dif/dif.o 00:03:11.222 CC test/blobfs/mkfs/mkfs.o 00:03:11.222 TEST_HEADER include/spdk/iscsi_spec.h 00:03:11.222 TEST_HEADER include/spdk/json.h 00:03:11.222 TEST_HEADER include/spdk/jsonrpc.h 00:03:11.222 CC test/bdev/bdevio/bdevio.o 00:03:11.222 TEST_HEADER include/spdk/likely.h 00:03:11.222 TEST_HEADER include/spdk/log.h 00:03:11.222 TEST_HEADER include/spdk/lvol.h 00:03:11.222 TEST_HEADER include/spdk/memory.h 00:03:11.222 CC examples/bdev/hello_world/hello_bdev.o 00:03:11.222 TEST_HEADER include/spdk/mmio.h 00:03:11.222 CC test/dma/test_dma/test_dma.o 00:03:11.222 TEST_HEADER include/spdk/nbd.h 00:03:11.222 CC examples/blob/hello_world/hello_blob.o 00:03:11.222 TEST_HEADER include/spdk/notify.h 00:03:11.222 TEST_HEADER include/spdk/nvme.h 00:03:11.222 TEST_HEADER include/spdk/nvme_intel.h 00:03:11.222 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:11.222 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:11.222 TEST_HEADER include/spdk/nvme_spec.h 00:03:11.222 TEST_HEADER include/spdk/nvme_zns.h 00:03:11.222 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:11.222 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:11.222 TEST_HEADER include/spdk/nvmf.h 00:03:11.222 TEST_HEADER include/spdk/nvmf_spec.h 00:03:11.222 TEST_HEADER include/spdk/nvmf_transport.h 00:03:11.222 TEST_HEADER include/spdk/opal.h 00:03:11.222 TEST_HEADER include/spdk/opal_spec.h 00:03:11.222 TEST_HEADER include/spdk/pci_ids.h 00:03:11.222 TEST_HEADER include/spdk/pipe.h 00:03:11.222 TEST_HEADER include/spdk/queue.h 00:03:11.222 TEST_HEADER include/spdk/reduce.h 00:03:11.480 TEST_HEADER include/spdk/rpc.h 00:03:11.480 TEST_HEADER include/spdk/scheduler.h 00:03:11.480 TEST_HEADER include/spdk/scsi.h 00:03:11.480 TEST_HEADER include/spdk/scsi_spec.h 00:03:11.480 TEST_HEADER include/spdk/sock.h 00:03:11.480 TEST_HEADER include/spdk/stdinc.h 00:03:11.480 TEST_HEADER include/spdk/string.h 00:03:11.480 TEST_HEADER include/spdk/thread.h 00:03:11.480 TEST_HEADER include/spdk/trace.h 00:03:11.480 TEST_HEADER include/spdk/trace_parser.h 00:03:11.480 TEST_HEADER include/spdk/tree.h 00:03:11.480 TEST_HEADER include/spdk/ublk.h 00:03:11.480 TEST_HEADER include/spdk/util.h 00:03:11.480 TEST_HEADER include/spdk/uuid.h 00:03:11.480 TEST_HEADER include/spdk/version.h 00:03:11.480 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:11.480 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:11.480 TEST_HEADER include/spdk/vhost.h 00:03:11.480 TEST_HEADER include/spdk/vmd.h 00:03:11.480 TEST_HEADER include/spdk/xor.h 00:03:11.480 TEST_HEADER include/spdk/zipf.h 00:03:11.480 CXX test/cpp_headers/accel.o 00:03:11.480 LINK bdev_svc 00:03:11.480 LINK mkfs 00:03:11.480 LINK hello_blob 00:03:11.737 CXX test/cpp_headers/accel_module.o 00:03:11.737 LINK hello_bdev 00:03:11.737 LINK spdk_trace 00:03:11.737 LINK test_dma 00:03:11.737 LINK bdevio 00:03:11.737 LINK dif 00:03:11.737 LINK accel_perf 00:03:11.738 CXX test/cpp_headers/assert.o 00:03:11.738 CC test/app/histogram_perf/histogram_perf.o 00:03:11.738 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:11.995 CC app/trace_record/trace_record.o 00:03:11.995 CC examples/blob/cli/blobcli.o 00:03:11.995 CC examples/bdev/bdevperf/bdevperf.o 00:03:11.995 CXX test/cpp_headers/barrier.o 00:03:11.995 LINK histogram_perf 00:03:11.995 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:11.995 CC test/app/jsoncat/jsoncat.o 00:03:11.995 CC test/app/stub/stub.o 00:03:12.293 CXX test/cpp_headers/base64.o 00:03:12.293 LINK spdk_trace_record 00:03:12.293 CC test/env/mem_callbacks/mem_callbacks.o 00:03:12.293 LINK jsoncat 00:03:12.293 LINK stub 00:03:12.293 CC test/event/event_perf/event_perf.o 00:03:12.293 CXX test/cpp_headers/bdev.o 00:03:12.293 LINK nvme_fuzz 00:03:12.573 CXX test/cpp_headers/bdev_module.o 00:03:12.573 CC app/nvmf_tgt/nvmf_main.o 00:03:12.573 CXX test/cpp_headers/bdev_zone.o 00:03:12.573 LINK event_perf 00:03:12.573 LINK blobcli 00:03:12.573 LINK nvmf_tgt 00:03:12.573 CXX test/cpp_headers/bit_array.o 00:03:12.573 CC test/nvme/aer/aer.o 00:03:12.573 CC test/nvme/reset/reset.o 00:03:12.830 CC test/event/reactor/reactor.o 00:03:12.830 CC test/lvol/esnap/esnap.o 00:03:12.830 CXX test/cpp_headers/bit_pool.o 00:03:12.830 LINK mem_callbacks 00:03:12.830 LINK reactor 00:03:12.830 CC test/nvme/sgl/sgl.o 00:03:12.830 LINK bdevperf 00:03:12.830 CXX test/cpp_headers/blob_bdev.o 00:03:13.088 CC app/iscsi_tgt/iscsi_tgt.o 00:03:13.088 LINK reset 00:03:13.088 LINK aer 00:03:13.088 CC test/env/vtophys/vtophys.o 00:03:13.088 CC test/event/reactor_perf/reactor_perf.o 00:03:13.088 CXX test/cpp_headers/blobfs_bdev.o 00:03:13.346 LINK vtophys 00:03:13.346 LINK iscsi_tgt 00:03:13.346 LINK sgl 00:03:13.346 LINK reactor_perf 00:03:13.346 CC test/rpc_client/rpc_client_test.o 00:03:13.346 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:13.346 CC examples/ioat/perf/perf.o 00:03:13.346 CXX test/cpp_headers/blobfs.o 00:03:13.346 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:13.346 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:13.604 CC test/nvme/e2edp/nvme_dp.o 00:03:13.604 CXX test/cpp_headers/blob.o 00:03:13.604 CC test/event/app_repeat/app_repeat.o 00:03:13.604 LINK rpc_client_test 00:03:13.604 LINK ioat_perf 00:03:13.604 CC app/spdk_tgt/spdk_tgt.o 00:03:13.604 LINK env_dpdk_post_init 00:03:13.604 CXX test/cpp_headers/conf.o 00:03:13.604 LINK app_repeat 00:03:13.604 CXX test/cpp_headers/config.o 00:03:13.863 CC examples/ioat/verify/verify.o 00:03:13.863 LINK spdk_tgt 00:03:13.863 LINK nvme_dp 00:03:13.863 CC test/thread/poller_perf/poller_perf.o 00:03:13.863 CXX test/cpp_headers/cpuset.o 00:03:13.863 CC test/env/memory/memory_ut.o 00:03:13.863 LINK vhost_fuzz 00:03:13.863 CC test/event/scheduler/scheduler.o 00:03:14.122 LINK poller_perf 00:03:14.122 CXX test/cpp_headers/crc16.o 00:03:14.122 LINK verify 00:03:14.122 CC test/nvme/overhead/overhead.o 00:03:14.122 CXX test/cpp_headers/crc32.o 00:03:14.122 CC app/spdk_lspci/spdk_lspci.o 00:03:14.122 CXX test/cpp_headers/crc64.o 00:03:14.122 LINK scheduler 00:03:14.381 LINK spdk_lspci 00:03:14.381 LINK iscsi_fuzz 00:03:14.381 CC test/nvme/err_injection/err_injection.o 00:03:14.381 CC test/nvme/startup/startup.o 00:03:14.381 CXX test/cpp_headers/dif.o 00:03:14.381 LINK overhead 00:03:14.381 CC examples/nvme/hello_world/hello_world.o 00:03:14.381 LINK err_injection 00:03:14.381 CC app/spdk_nvme_perf/perf.o 00:03:14.639 CXX test/cpp_headers/dma.o 00:03:14.639 LINK startup 00:03:14.639 CC test/nvme/reserve/reserve.o 00:03:14.639 CXX test/cpp_headers/endian.o 00:03:14.639 CXX test/cpp_headers/env_dpdk.o 00:03:14.639 LINK 
hello_world 00:03:14.639 CXX test/cpp_headers/env.o 00:03:14.639 CC test/nvme/simple_copy/simple_copy.o 00:03:14.897 CC test/nvme/connect_stress/connect_stress.o 00:03:14.897 LINK reserve 00:03:14.897 CC test/nvme/boot_partition/boot_partition.o 00:03:14.897 CC examples/nvme/reconnect/reconnect.o 00:03:14.898 CXX test/cpp_headers/event.o 00:03:14.898 CC test/nvme/compliance/nvme_compliance.o 00:03:14.898 LINK memory_ut 00:03:14.898 LINK connect_stress 00:03:14.898 LINK boot_partition 00:03:14.898 CC test/nvme/fused_ordering/fused_ordering.o 00:03:15.155 CXX test/cpp_headers/fd_group.o 00:03:15.155 LINK simple_copy 00:03:15.155 CC test/env/pci/pci_ut.o 00:03:15.155 CXX test/cpp_headers/fd.o 00:03:15.155 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:15.155 LINK reconnect 00:03:15.155 CC test/nvme/fdp/fdp.o 00:03:15.155 LINK fused_ordering 00:03:15.413 CC test/nvme/cuse/cuse.o 00:03:15.413 LINK nvme_compliance 00:03:15.413 CXX test/cpp_headers/file.o 00:03:15.413 LINK doorbell_aers 00:03:15.413 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:15.413 LINK spdk_nvme_perf 00:03:15.671 CXX test/cpp_headers/ftl.o 00:03:15.671 CC app/spdk_nvme_identify/identify.o 00:03:15.671 CC examples/sock/hello_world/hello_sock.o 00:03:15.671 LINK fdp 00:03:15.671 LINK pci_ut 00:03:15.671 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.671 CXX test/cpp_headers/gpt_spec.o 00:03:15.671 CC examples/vmd/led/led.o 00:03:15.929 CC examples/nvme/arbitration/arbitration.o 00:03:15.929 LINK hello_sock 00:03:15.929 LINK lsvmd 00:03:15.929 CXX test/cpp_headers/hexlify.o 00:03:15.929 LINK led 00:03:15.929 CXX test/cpp_headers/histogram_data.o 00:03:16.187 CXX test/cpp_headers/idxd.o 00:03:16.187 LINK nvme_manage 00:03:16.187 CC examples/nvme/hotplug/hotplug.o 00:03:16.187 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.187 CXX test/cpp_headers/idxd_spec.o 00:03:16.187 LINK arbitration 00:03:16.187 CC examples/util/zipf/zipf.o 00:03:16.187 CC examples/nvmf/nvmf/nvmf.o 00:03:16.445 LINK cmb_copy 00:03:16.445 CC examples/nvme/abort/abort.o 00:03:16.445 LINK hotplug 00:03:16.445 CXX test/cpp_headers/init.o 00:03:16.445 LINK zipf 00:03:16.445 LINK cuse 00:03:16.445 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.703 CXX test/cpp_headers/ioat.o 00:03:16.703 LINK spdk_nvme_identify 00:03:16.703 CXX test/cpp_headers/ioat_spec.o 00:03:16.703 CXX test/cpp_headers/iscsi_spec.o 00:03:16.703 LINK nvmf 00:03:16.703 LINK pmr_persistence 00:03:16.703 CC examples/thread/thread/thread_ex.o 00:03:16.703 CXX test/cpp_headers/json.o 00:03:16.703 CC app/spdk_nvme_discover/discovery_aer.o 00:03:16.961 CXX test/cpp_headers/jsonrpc.o 00:03:16.961 LINK abort 00:03:16.961 CC app/spdk_top/spdk_top.o 00:03:16.961 CXX test/cpp_headers/likely.o 00:03:16.961 CXX test/cpp_headers/log.o 00:03:16.961 CC examples/idxd/perf/perf.o 00:03:16.961 LINK thread 00:03:16.961 LINK spdk_nvme_discover 00:03:16.961 CC app/vhost/vhost.o 00:03:17.218 CXX test/cpp_headers/lvol.o 00:03:17.218 CC app/spdk_dd/spdk_dd.o 00:03:17.218 CC app/fio/nvme/fio_plugin.o 00:03:17.218 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:17.218 CXX test/cpp_headers/memory.o 00:03:17.218 LINK vhost 00:03:17.218 CXX test/cpp_headers/mmio.o 00:03:17.218 CC app/fio/bdev/fio_plugin.o 00:03:17.476 LINK idxd_perf 00:03:17.476 LINK interrupt_tgt 00:03:17.476 CXX test/cpp_headers/nbd.o 00:03:17.476 CXX test/cpp_headers/notify.o 00:03:17.476 CXX test/cpp_headers/nvme.o 00:03:17.476 CXX test/cpp_headers/nvme_intel.o 00:03:17.476 CXX test/cpp_headers/nvme_ocssd.o 00:03:17.476 LINK spdk_dd 00:03:17.476 
CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:17.734 CXX test/cpp_headers/nvme_spec.o 00:03:17.734 CXX test/cpp_headers/nvme_zns.o 00:03:17.734 CXX test/cpp_headers/nvmf_cmd.o 00:03:17.734 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:17.734 CXX test/cpp_headers/nvmf.o 00:03:17.734 CXX test/cpp_headers/nvmf_spec.o 00:03:17.734 CXX test/cpp_headers/nvmf_transport.o 00:03:17.734 LINK spdk_nvme 00:03:17.734 CXX test/cpp_headers/opal.o 00:03:17.992 CXX test/cpp_headers/opal_spec.o 00:03:17.992 LINK spdk_bdev 00:03:17.992 CXX test/cpp_headers/pci_ids.o 00:03:17.992 CXX test/cpp_headers/pipe.o 00:03:17.992 CXX test/cpp_headers/queue.o 00:03:17.992 CXX test/cpp_headers/reduce.o 00:03:17.992 LINK spdk_top 00:03:17.992 CXX test/cpp_headers/rpc.o 00:03:17.992 CXX test/cpp_headers/scheduler.o 00:03:17.992 CXX test/cpp_headers/scsi.o 00:03:17.992 CXX test/cpp_headers/scsi_spec.o 00:03:17.992 CXX test/cpp_headers/sock.o 00:03:17.992 CXX test/cpp_headers/stdinc.o 00:03:17.992 CXX test/cpp_headers/string.o 00:03:17.992 CXX test/cpp_headers/thread.o 00:03:18.251 CXX test/cpp_headers/trace.o 00:03:18.251 CXX test/cpp_headers/trace_parser.o 00:03:18.251 CXX test/cpp_headers/tree.o 00:03:18.251 CXX test/cpp_headers/ublk.o 00:03:18.251 CXX test/cpp_headers/util.o 00:03:18.251 CXX test/cpp_headers/uuid.o 00:03:18.251 CXX test/cpp_headers/version.o 00:03:18.251 CXX test/cpp_headers/vfio_user_pci.o 00:03:18.251 CXX test/cpp_headers/vfio_user_spec.o 00:03:18.251 CXX test/cpp_headers/vhost.o 00:03:18.251 CXX test/cpp_headers/vmd.o 00:03:18.251 CXX test/cpp_headers/xor.o 00:03:18.251 CXX test/cpp_headers/zipf.o 00:03:18.819 LINK esnap 00:03:19.386 00:03:19.386 real 1m9.753s 00:03:19.386 user 7m13.339s 00:03:19.386 sys 1m24.463s 00:03:19.386 20:54:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:19.386 20:54:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:19.386 ************************************ 00:03:19.386 END TEST make 00:03:19.386 ************************************ 00:03:19.386 20:54:33 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:19.386 20:54:33 -- nvmf/common.sh@7 -- # uname -s 00:03:19.386 20:54:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:19.386 20:54:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:19.386 20:54:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:19.386 20:54:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:19.386 20:54:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:19.386 20:54:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:19.386 20:54:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:19.386 20:54:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:19.386 20:54:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:19.386 20:54:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:19.644 20:54:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ac8e35c3-2976-4d36-a627-b0337040b223 00:03:19.644 20:54:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=ac8e35c3-2976-4d36-a627-b0337040b223 00:03:19.644 20:54:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:19.644 20:54:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:19.644 20:54:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:19.644 20:54:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:19.644 20:54:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:19.644 20:54:33 
-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:19.644 20:54:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:19.644 20:54:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.644 20:54:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.644 20:54:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.644 20:54:33 -- paths/export.sh@5 -- # export PATH 00:03:19.644 20:54:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.644 20:54:33 -- nvmf/common.sh@46 -- # : 0 00:03:19.644 20:54:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:19.644 20:54:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:19.644 20:54:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:19.644 20:54:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:19.644 20:54:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:19.644 20:54:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:19.644 20:54:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:19.644 20:54:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:19.644 20:54:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:19.644 20:54:33 -- spdk/autotest.sh@32 -- # uname -s 00:03:19.644 20:54:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:19.644 20:54:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:19.644 20:54:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.644 20:54:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:19.644 20:54:33 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.644 20:54:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:19.644 20:54:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:19.644 20:54:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:19.644 20:54:33 -- spdk/autotest.sh@48 -- # udevadm_pid=48318 00:03:19.644 20:54:33 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:19.644 20:54:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:19.644 20:54:33 -- spdk/autotest.sh@54 -- # echo 48327 00:03:19.644 20:54:33 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:19.644 20:54:33 -- spdk/autotest.sh@56 -- # echo 48331 00:03:19.644 20:54:33 -- spdk/autotest.sh@55 -- 
# /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:19.644 20:54:33 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:19.644 20:54:33 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:19.644 20:54:33 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:19.644 20:54:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:19.644 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:03:19.644 20:54:33 -- spdk/autotest.sh@70 -- # create_test_list 00:03:19.644 20:54:33 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:19.644 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:03:19.644 20:54:33 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:19.644 20:54:33 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:19.644 20:54:33 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:19.644 20:54:33 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:19.644 20:54:33 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:19.644 20:54:33 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:19.644 20:54:33 -- common/autotest_common.sh@1440 -- # uname 00:03:19.644 20:54:33 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:19.644 20:54:33 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:19.644 20:54:33 -- common/autotest_common.sh@1460 -- # uname 00:03:19.644 20:54:33 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:19.644 20:54:33 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:19.644 20:54:33 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:19.644 20:54:33 -- spdk/autotest.sh@83 -- # hash lcov 00:03:19.644 20:54:33 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:19.644 20:54:33 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:19.644 --rc lcov_branch_coverage=1 00:03:19.644 --rc lcov_function_coverage=1 00:03:19.644 --rc genhtml_branch_coverage=1 00:03:19.644 --rc genhtml_function_coverage=1 00:03:19.644 --rc genhtml_legend=1 00:03:19.644 --rc geninfo_all_blocks=1 00:03:19.644 ' 00:03:19.644 20:54:33 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:19.644 --rc lcov_branch_coverage=1 00:03:19.644 --rc lcov_function_coverage=1 00:03:19.644 --rc genhtml_branch_coverage=1 00:03:19.644 --rc genhtml_function_coverage=1 00:03:19.645 --rc genhtml_legend=1 00:03:19.645 --rc geninfo_all_blocks=1 00:03:19.645 ' 00:03:19.645 20:54:33 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:19.645 --rc lcov_branch_coverage=1 00:03:19.645 --rc lcov_function_coverage=1 00:03:19.645 --rc genhtml_branch_coverage=1 00:03:19.645 --rc genhtml_function_coverage=1 00:03:19.645 --rc genhtml_legend=1 00:03:19.645 --rc geninfo_all_blocks=1 00:03:19.645 --no-external' 00:03:19.645 20:54:33 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:19.645 --rc lcov_branch_coverage=1 00:03:19.645 --rc lcov_function_coverage=1 00:03:19.645 --rc genhtml_branch_coverage=1 00:03:19.645 --rc genhtml_function_coverage=1 00:03:19.645 --rc genhtml_legend=1 00:03:19.645 --rc geninfo_all_blocks=1 00:03:19.645 --no-external' 00:03:19.645 20:54:33 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:19.645 lcov: LCOV version 1.14 00:03:19.645 20:54:33 -- 
spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:27.750 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:27.750 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:27.750 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:27.750 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:27.750 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:27.750 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:45.832 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:45.832 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:45.832 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:45.832 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 
00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:45.833 
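
The "no functions found" notices throughout this capture are expected: this is the initial (-i) pass, taken before any test has executed, so only the compile-time .gcno graph files exist and headers with no instantiated code contribute nothing. A hedged sketch of the baseline-plus-test workflow these flags imply (paths and file names illustrative, not taken from this run):

    # initial capture: .gcno only, every instrumented file at 0%
    lcov -q -c -i -d "$SRC_DIR" -t Baseline -o cov_base.info
    # ...run the test suite, which writes the .gcda counters...
    lcov -q -c -d "$SRC_DIR" -t Tests -o cov_test.info
    # merging keeps never-executed files visible in the final report
    lcov -a cov_base.info -a cov_test.info -o cov_total.info
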
/home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:45.833 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:45.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:46.770 20:55:00 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:46.770 20:55:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:46.770 20:55:00 -- common/autotest_common.sh@10 -- # set +x 00:03:46.770 20:55:00 -- spdk/autotest.sh@102 -- # rm -f 00:03:46.770 20:55:00 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.707 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:47.707 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:47.707 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:47.707 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:47.707 20:55:01 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:47.707 20:55:01 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:47.707 20:55:01 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:47.707 20:55:01 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:47.707 20:55:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:47.707 20:55:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:47.707 20:55:01 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:47.707 20:55:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.707 20:55:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:47.707 20:55:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:47.707 20:55:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:47.707 20:55:01 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:47.707 20:55:01 -- common/autotest_common.sh@1649 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:03:47.707 20:55:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:47.707 20:55:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:47.707 20:55:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:03:47.707 20:55:01 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:03:47.707 20:55:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:47.707 20:55:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:47.707 20:55:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:47.707 20:55:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:03:47.707 20:55:01 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:03:47.707 20:55:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:47.707 20:55:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:47.707 20:55:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:47.707 20:55:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:03:47.707 20:55:01 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:03:47.707 20:55:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:47.707 20:55:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:47.707 20:55:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:47.708 20:55:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:03:47.708 20:55:01 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:03:47.708 20:55:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:47.708 20:55:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:47.708 20:55:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:47.708 20:55:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:03:47.708 20:55:01 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:03:47.708 20:55:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:47.708 20:55:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:47.708 20:55:01 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:47.708 20:55:01 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:47.708 20:55:01 -- spdk/autotest.sh@121 -- # grep -v p 00:03:47.708 20:55:01 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:47.708 20:55:01 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:47.708 20:55:01 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:47.708 20:55:01 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:47.708 20:55:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.708 No valid GPT data, bailing 00:03:47.708 20:55:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.708 20:55:01 -- scripts/common.sh@393 -- # pt= 00:03:47.708 20:55:01 -- scripts/common.sh@394 -- # return 1 00:03:47.708 20:55:01 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.708 1+0 records in 00:03:47.708 1+0 records out 00:03:47.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013417 s, 78.2 MB/s 00:03:47.708 20:55:01 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | 
grep -v p || true) 00:03:47.708 20:55:01 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:47.708 20:55:01 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:03:47.708 20:55:01 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:47.708 20:55:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:47.708 No valid GPT data, bailing 00:03:47.708 20:55:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:47.967 20:55:01 -- scripts/common.sh@393 -- # pt= 00:03:47.967 20:55:01 -- scripts/common.sh@394 -- # return 1 00:03:47.967 20:55:01 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:47.967 1+0 records in 00:03:47.967 1+0 records out 00:03:47.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00438135 s, 239 MB/s 00:03:47.967 20:55:01 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:47.967 20:55:01 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:47.967 20:55:01 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n1 00:03:47.967 20:55:01 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:47.967 20:55:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:47.967 No valid GPT data, bailing 00:03:47.967 20:55:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:47.967 20:55:01 -- scripts/common.sh@393 -- # pt= 00:03:47.967 20:55:01 -- scripts/common.sh@394 -- # return 1 00:03:47.967 20:55:01 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:47.967 1+0 records in 00:03:47.967 1+0 records out 00:03:47.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443907 s, 236 MB/s 00:03:47.967 20:55:01 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:47.967 20:55:01 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:47.967 20:55:01 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n2 00:03:47.967 20:55:01 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:47.967 20:55:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:47.967 No valid GPT data, bailing 00:03:47.967 20:55:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:47.967 20:55:01 -- scripts/common.sh@393 -- # pt= 00:03:47.967 20:55:01 -- scripts/common.sh@394 -- # return 1 00:03:47.967 20:55:01 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:47.967 1+0 records in 00:03:47.967 1+0 records out 00:03:47.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444439 s, 236 MB/s 00:03:47.967 20:55:01 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:47.967 20:55:01 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:47.967 20:55:01 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n3 00:03:47.967 20:55:01 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:47.967 20:55:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:47.967 No valid GPT data, bailing 00:03:47.967 20:55:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:47.967 20:55:01 -- scripts/common.sh@393 -- # pt= 00:03:47.967 20:55:01 -- scripts/common.sh@394 -- # return 1 00:03:47.967 20:55:01 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:47.967 1+0 records in 00:03:47.967 1+0 records out 00:03:47.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422627 
s, 248 MB/s 00:03:47.967 20:55:01 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:47.967 20:55:01 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:47.967 20:55:01 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n1 00:03:47.967 20:55:01 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:47.967 20:55:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:48.226 No valid GPT data, bailing 00:03:48.226 20:55:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:48.226 20:55:01 -- scripts/common.sh@393 -- # pt= 00:03:48.226 20:55:01 -- scripts/common.sh@394 -- # return 1 00:03:48.226 20:55:01 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:48.226 1+0 records in 00:03:48.226 1+0 records out 00:03:48.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00410034 s, 256 MB/s 00:03:48.226 20:55:01 -- spdk/autotest.sh@129 -- # sync 00:03:48.485 20:55:02 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:48.485 20:55:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:48.485 20:55:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:50.388 20:55:03 -- spdk/autotest.sh@135 -- # uname -s 00:03:50.388 20:55:03 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:50.388 20:55:03 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:50.388 20:55:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:50.388 20:55:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:50.388 20:55:03 -- common/autotest_common.sh@10 -- # set +x 00:03:50.388 ************************************ 00:03:50.388 START TEST setup.sh 00:03:50.388 ************************************ 00:03:50.388 20:55:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:50.388 * Looking for test storage... 00:03:50.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:50.388 20:55:04 -- setup/test-setup.sh@10 -- # uname -s 00:03:50.388 20:55:04 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:50.388 20:55:04 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:50.388 20:55:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:50.388 20:55:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:50.388 20:55:04 -- common/autotest_common.sh@10 -- # set +x 00:03:50.388 ************************************ 00:03:50.388 START TEST acl 00:03:50.388 ************************************ 00:03:50.388 20:55:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:50.388 * Looking for test storage... 
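
Each namespace in the wipe loop above is zeroed only after both partition probes come back empty: spdk-gpt.py reports "No valid GPT data, bailing" and blkid finds no PTTYPE, so block_in_use returns 1 and the first MiB is clobbered. A condensed sketch of that guard using just the blkid leg (the spdk-gpt.py exit-code semantics are inferred from the "bailing" message, not confirmed here):

    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      if [[ -z $pt ]]; then
        # no partition signature found: safe (for CI purposes) to wipe
        dd if=/dev/zero of="$dev" bs=1M count=1
      fi
    done
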
00:03:50.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:50.388 20:55:04 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:50.388 20:55:04 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:50.388 20:55:04 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:50.388 20:55:04 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:50.388 20:55:04 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:50.388 20:55:04 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:50.388 20:55:04 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:50.388 20:55:04 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:50.388 20:55:04 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:50.388 20:55:04 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:50.388 20:55:04 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:50.388 20:55:04 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:03:50.388 20:55:04 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:03:50.388 20:55:04 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:50.388 20:55:04 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:03:50.388 20:55:04 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:03:50.388 20:55:04 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:50.388 20:55:04 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:03:50.388 20:55:04 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:03:50.388 20:55:04 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:50.388 20:55:04 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:03:50.388 20:55:04 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:03:50.388 20:55:04 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:50.388 20:55:04 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:03:50.388 20:55:04 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:03:50.388 20:55:04 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:50.388 20:55:04 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:50.388 20:55:04 -- setup/acl.sh@12 -- # devs=() 00:03:50.388 20:55:04 -- setup/acl.sh@12 -- # declare -a devs 
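
Both the pre-cleanup pass earlier and this acl run apply the same zoned-device filter: a namespace counts as zoned only when /sys/block/<dev>/queue/zoned exists and reads something other than "none". A standalone sketch of that check:

    declare -A zoned_devs
    for nvme in /dev/nvme*n*; do
      dev=${nvme##*/}                       # e.g. nvme2n1
      attr=/sys/block/$dev/queue/zoned
      # conventional namespaces report "none"; anything else is zoned
      if [[ -e $attr && $(<"$attr") != none ]]; then
        zoned_devs[$dev]=1
      fi
    done
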
00:03:50.388 20:55:04 -- setup/acl.sh@13 -- # drivers=() 00:03:50.388 20:55:04 -- setup/acl.sh@13 -- # declare -A drivers 00:03:50.388 20:55:04 -- setup/acl.sh@51 -- # setup reset 00:03:50.388 20:55:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.388 20:55:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.323 20:55:05 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:51.323 20:55:05 -- setup/acl.sh@16 -- # local dev driver 00:03:51.323 20:55:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.323 20:55:05 -- setup/acl.sh@15 -- # setup output status 00:03:51.323 20:55:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.323 20:55:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:51.582 Hugepages 00:03:51.582 node hugesize free / total 00:03:51.582 20:55:05 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:51.582 20:55:05 -- setup/acl.sh@19 -- # continue 00:03:51.582 20:55:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.582 00:03:51.582 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.582 20:55:05 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:51.582 20:55:05 -- setup/acl.sh@19 -- # continue 00:03:51.582 20:55:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.582 20:55:05 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:51.582 20:55:05 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:51.582 20:55:05 -- setup/acl.sh@20 -- # continue 00:03:51.582 20:55:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.842 20:55:05 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:51.842 20:55:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:51.842 20:55:05 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:51.842 20:55:05 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:51.842 20:55:05 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:51.842 20:55:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.842 20:55:05 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:51.842 20:55:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:51.842 20:55:05 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:51.842 20:55:05 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:51.842 20:55:05 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:51.842 20:55:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.842 20:55:05 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:51.842 20:55:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:51.842 20:55:05 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:51.842 20:55:05 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:51.842 20:55:05 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:51.842 20:55:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.842 20:55:05 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:51.842 20:55:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:51.842 20:55:05 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:51.842 20:55:05 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:51.842 20:55:05 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:51.842 20:55:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.842 20:55:05 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:51.842 20:55:05 -- setup/acl.sh@54 -- # run_test denied denied 00:03:51.842 20:55:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.842 
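
The device collection in the hugepages/BDF table above is a column parse: read -r _ dev _ _ _ driver _ takes the BDF from field 2 and the kernel driver from field 6 of each setup.sh status row, and only nvme-bound functions land in the arrays. A minimal sketch under that assumed column layout:

    declare -a devs
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue     # skip hugepage and header rows
      [[ $driver == nvme ]] || continue     # virtio-pci etc. are ignored
      devs+=("$dev")
      drivers[$dev]=$driver
    done < <(scripts/setup.sh status)
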
00:03:51.842 20:55:05 -- setup/acl.sh@54 -- # run_test denied denied
00:03:51.842 20:55:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:51.842 20:55:05 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:51.842 20:55:05 -- common/autotest_common.sh@10 -- # set +x
00:03:52.100 ************************************
00:03:52.100 START TEST denied
00:03:52.100 ************************************
00:03:52.100 20:55:05 -- common/autotest_common.sh@1104 -- # denied
00:03:52.100 20:55:05 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0'
00:03:52.100 20:55:05 -- setup/acl.sh@38 -- # setup output config
00:03:52.100 20:55:05 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:52.100 20:55:05 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0'
00:03:52.100 20:55:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:03:53.477 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0
00:03:53.477 20:55:06 -- setup/acl.sh@40 -- # verify 0000:00:06.0
00:03:53.477 20:55:06 -- setup/acl.sh@28 -- # local dev driver
00:03:53.477 20:55:06 -- setup/acl.sh@30 -- # for dev in "$@"
00:03:53.477 20:55:06 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]]
00:03:53.477 20:55:06 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver
00:03:53.477 20:55:06 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:53.477 20:55:06 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:53.477 20:55:06 -- setup/acl.sh@41 -- # setup reset
00:03:53.477 20:55:06 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:53.477 20:55:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:00.063
00:04:00.063 real 0m7.146s
00:04:00.063 user 0m0.830s
00:04:00.063 sys 0m1.362s
00:04:00.063 20:55:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:00.063 20:55:12 -- common/autotest_common.sh@10 -- # set +x
00:04:00.063 ************************************
00:04:00.063 END TEST denied
00:04:00.063 ************************************
00:04:00.063 20:55:12 -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:00.063 20:55:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:00.063 20:55:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:00.063 20:55:12 -- common/autotest_common.sh@10 -- # set +x
00:04:00.063 ************************************
00:04:00.063 START TEST allowed
00:04:00.063 ************************************
00:04:00.063 20:55:12 -- common/autotest_common.sh@1104 -- # allowed
00:04:00.063 20:55:12 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0
00:04:00.063 20:55:12 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*'
00:04:00.063 20:55:12 -- setup/acl.sh@45 -- # setup output config
00:04:00.063 20:55:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:00.063 20:55:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:00.321 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:00.321 20:55:14 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:04:00.322 20:55:14 -- setup/acl.sh@28 -- # local dev driver
00:04:00.322 20:55:14 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:00.322 20:55:14 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]]
00:04:00.322 20:55:14 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver
00:04:00.322 20:55:14 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:00.322 20:55:14 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
[... the same five-line driver check repeats for 0000:00:08.0 and 0000:00:09.0; all three controllers remain bound to nvme ...]
00:04:00.322 20:55:14 -- setup/acl.sh@48 -- # setup reset
00:04:00.322 20:55:14 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:00.322 20:55:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:01.705 ************************************
00:04:01.705 END TEST allowed
00:04:01.705 ************************************
00:04:01.705
00:04:01.705 real 0m2.221s
00:04:01.705 user 0m0.996s
00:04:01.705 sys 0m1.216s
00:04:01.705 20:55:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:01.705 20:55:15 -- common/autotest_common.sh@10 -- # set +x
00:04:01.705 ************************************
00:04:01.705 END TEST acl
00:04:01.705 ************************************
00:04:01.705
00:04:01.705 real 0m11.162s
00:04:01.705 user 0m2.631s
00:04:01.705 sys 0m3.581s
00:04:01.705 20:55:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:01.705 20:55:15 -- common/autotest_common.sh@10 -- # set +x
00:04:01.705 20:55:15 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:01.705 20:55:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:01.705 20:55:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:01.705 20:55:15 -- common/autotest_common.sh@10 -- # set +x
00:04:01.705 ************************************
00:04:01.705 START TEST hugepages
00:04:01.705 ************************************
00:04:01.705 20:55:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:01.705 * Looking for test storage...
00:04:01.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:01.705 20:55:15 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:01.705 20:55:15 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:01.705 20:55:15 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:01.705 20:55:15 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:01.705 20:55:15 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:01.705 20:55:15 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:01.705 20:55:15 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:01.705 20:55:15 -- setup/common.sh@18 -- # local node=
00:04:01.705 20:55:15 -- setup/common.sh@19 -- # local var val
00:04:01.705 20:55:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.705 20:55:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.705 20:55:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.705 20:55:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.705 20:55:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.705 20:55:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.705 20:55:15 -- setup/common.sh@31 -- # IFS=': '
00:04:01.705 20:55:15 -- setup/common.sh@31 -- # read -r var val _
00:04:01.705 20:55:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5864688 kB' 'MemAvailable: 7401332 kB' 'Buffers: 2436 kB' 'Cached: 1750320 kB' 'SwapCached: 0 kB' 'Active: 442812 kB' 'Inactive: 1410260 kB' 'Active(anon): 110828 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 101912 kB' 'Mapped: 48696 kB' 'Shmem: 10512 kB' 'KReclaimable: 62676 kB' 'Slab: 134916 kB' 'SReclaimable: 62676 kB' 'SUnreclaim: 72240 kB' 'KernelStack: 6252 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 326448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[... condensed: the read loop then walks that snapshot field by field, matching each name against Hugepagesize and issuing "continue" for every non-match, until it reaches the Hugepagesize line ...]
00:04:01.707 20:55:15 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:01.707 20:55:15 -- setup/common.sh@33 -- # echo 2048
00:04:01.707 20:55:15 -- setup/common.sh@33 -- # return 0
00:04:01.707 20:55:15 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:01.707 20:55:15 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:01.707 20:55:15 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:01.707 20:55:15 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:01.707 20:55:15 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:01.707 20:55:15 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:01.707 20:55:15 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
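get_meminfo, which just extracted Hugepagesize from that snapshot, is nothing more than a single-field lookup over /proc/meminfo. The same lookup as a self-contained function (simplified: the real helper can also be pointed at a per-node /sys/devices/system/node/node*/meminfo, which this sketch omits):

    get_meminfo() {
        local field=$1 var val _
        while IFS=': ' read -r var val _; do
            # First matching field wins; the value prints without its unit.
            [[ $var == "$field" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo Hugepagesize   # prints 2048 on this VM, as traced above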
00:04:01.707 20:55:15 -- setup/hugepages.sh@207 -- # get_nodes
00:04:01.707 20:55:15 -- setup/hugepages.sh@27 -- # local node
00:04:01.707 20:55:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.707 20:55:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:01.707 20:55:15 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:01.707 20:55:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.707 20:55:15 -- setup/hugepages.sh@208 -- # clear_hp
00:04:01.707 20:55:15 -- setup/hugepages.sh@37 -- # local node hp
00:04:01.707 20:55:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:01.707 20:55:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:01.707 20:55:15 -- setup/hugepages.sh@41 -- # echo 0
00:04:01.707 20:55:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:01.707 20:55:15 -- setup/hugepages.sh@41 -- # echo 0
00:04:01.707 20:55:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:01.707 20:55:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
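get_nodes found a single NUMA node, and clear_hp zeroed both of its hugepage pools (the 2048kB and 1048576kB sizes listed by setup.sh status earlier) with one sysfs write each. Stripped of the test plumbing, the reset amounts to the loop below (needs root; CLEAR_HUGE is exported for setup.sh exactly as the trace shows, though how setup.sh consumes it is not visible in this log):

    # Zero every per-node hugepage pool, one sysfs write per page size.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
    export CLEAR_HUGE=yes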
00:04:01.707 20:55:15 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:01.707 20:55:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:01.707 20:55:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:01.707 20:55:15 -- common/autotest_common.sh@10 -- # set +x
00:04:01.707 ************************************
00:04:01.707 START TEST default_setup
00:04:01.707 ************************************
00:04:01.707 20:55:15 -- common/autotest_common.sh@1104 -- # default_setup
00:04:01.707 20:55:15 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:01.707 20:55:15 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:01.707 20:55:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:01.707 20:55:15 -- setup/hugepages.sh@51 -- # shift
00:04:01.707 20:55:15 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:01.707 20:55:15 -- setup/hugepages.sh@52 -- # local node_ids
00:04:01.707 20:55:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.707 20:55:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:01.707 20:55:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:01.707 20:55:15 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:01.707 20:55:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.707 20:55:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:01.707 20:55:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:01.707 20:55:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.707 20:55:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.707 20:55:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:01.707 20:55:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:01.707 20:55:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:01.707 20:55:15 -- setup/hugepages.sh@73 -- # return 0
00:04:01.707 20:55:15 -- setup/hugepages.sh@137 -- # setup output
00:04:01.707 20:55:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.707 20:55:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:02.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:02.647 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:02.647 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:02.909 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:02.909 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:02.909 20:55:16 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:02.909 20:55:16 -- setup/hugepages.sh@89 -- # local node
00:04:02.909 20:55:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.909 20:55:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.909 20:55:16 -- setup/hugepages.sh@92 -- # local surp
00:04:02.909 20:55:16 -- setup/hugepages.sh@93 -- # local resv
00:04:02.909 20:55:16 -- setup/hugepages.sh@94 -- # local anon
00:04:02.909 20:55:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.909 20:55:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:02.909 20:55:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.909 20:55:16 -- setup/common.sh@18 -- # local node=
00:04:02.909 20:55:16 -- setup/common.sh@19 -- # local var val
00:04:02.909 20:55:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.909 20:55:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.909 20:55:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.909 20:55:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.909 20:55:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.909 20:55:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.909 20:55:16 -- setup/common.sh@31 -- # IFS=': '
00:04:02.909 20:55:16 -- setup/common.sh@31 -- # read -r var val _
00:04:02.910 20:55:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7983004 kB' 'MemAvailable: 9519392 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459420 kB' 'Inactive: 1410288 kB' 'Active(anon): 127436 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118892 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133932 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71824 kB' 'KernelStack: 6288 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[... condensed: the same field-by-field walk over the snapshot, this time matching against AnonHugePages ...]
00:04:02.911 20:55:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.911 20:55:16 -- setup/common.sh@33 -- # echo 0
00:04:02.911 20:55:16 -- setup/common.sh@33 -- # return 0
00:04:02.911 20:55:16 -- setup/hugepages.sh@97 -- # anon=0
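anon came back 0 even with THP available ([madvise] above), and the next two lookups fetch the surplus and reserve counters from the same kind of snapshot. For reference, how the HugePages_* fields being collected relate to each other (illustrative only, reusing the get_meminfo sketch from earlier; the exact assertion verify_nr_hugepages builds from surp, resv and the per-node totals lives in hugepages.sh and is not shown in this excerpt):

    total=$(get_meminfo HugePages_Total)   # 1024 after default_setup, per the snapshot
    free=$(get_meminfo HugePages_Free)     # 1024: no page faulted in yet
    resv=$(get_meminfo HugePages_Rsvd)     # promised to mappings but not yet faulted
    surp=$(get_meminfo HugePages_Surp)     # overcommit pages above nr_hugepages
    echo "claimable now: $(( free - resv )) of $total (surplus: $surp)"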
00:04:02.911 20:55:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.911 20:55:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.911 20:55:16 -- setup/common.sh@18 -- # local node=
00:04:02.911 20:55:16 -- setup/common.sh@19 -- # local var val
00:04:02.911 20:55:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.911 20:55:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.911 20:55:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.911 20:55:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.911 20:55:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.911 20:55:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.911 20:55:16 -- setup/common.sh@31 -- # IFS=': '
00:04:02.911 20:55:16 -- setup/common.sh@31 -- # read -r var val _
00:04:02.911 20:55:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7983264 kB' 'MemAvailable: 9519652 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459612 kB' 'Inactive: 1410288 kB' 'Active(anon): 127628 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118768 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133916 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71808 kB' 'KernelStack: 6288 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[... condensed: field-by-field walk over the snapshot, matching against HugePages_Surp ...]
00:04:02.912 20:55:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.912 20:55:16 -- setup/common.sh@33 -- # echo 0
00:04:02.912 20:55:16 -- setup/common.sh@33 -- # return 0
00:04:02.912 20:55:16 -- setup/hugepages.sh@99 -- # surp=0
00:04:02.912 20:55:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.912 20:55:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:02.912 20:55:16 -- setup/common.sh@18 -- # local node=
00:04:02.912 20:55:16 -- setup/common.sh@19 -- # local var val
00:04:02.912 20:55:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.912 20:55:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.912 20:55:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.912 20:55:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.912 20:55:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.912 20:55:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.912 20:55:16 -- setup/common.sh@31 -- # IFS=': '
00:04:02.912 20:55:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7983268 kB' 'MemAvailable: 9519656 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459536 kB' 'Inactive: 1410288 kB' 'Active(anon): 127552 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118720 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133908 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71800 kB' 'KernelStack: 6272 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[... the field-by-field walk toward HugePages_Rsvd is only a few fields in when the captured log cuts off ...]
20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- 
setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 
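The long run of read/compare/continue entries here is setup/common.sh's get_meminfo helper scanning the memory statistics one key at a time: it snapshots /proc/meminfo (or a per-node meminfo file under sysfs when a node id is passed), strips the "Node <id> " prefix those per-node files carry, then walks the lines with IFS=': ' until the requested key matches and echoes its value. A minimal sketch of the pattern, reconstructed from the trace above (the exact argument handling is an assumption):

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) prefix-strip below needs extended globs

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem
        local mem_f=/proc/meminfo
        # Per-node counters live under sysfs when a node id is given.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # node*/meminfo lines are prefixed with "Node <id> "; drop that.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"   # a kB figure, or a bare count for HugePages_*
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd     # global: scans /proc/meminfo
    get_meminfo HugePages_Surp 0   # node 0: scans node0/meminfo
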
00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.913 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.913 20:55:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.913 20:55:16 -- setup/common.sh@33 -- # echo 0 00:04:02.913 20:55:16 -- setup/common.sh@33 -- # return 0 00:04:02.913 nr_hugepages=1024 00:04:02.913 resv_hugepages=0 00:04:02.913 surplus_hugepages=0 00:04:02.913 anon_hugepages=0 00:04:02.913 20:55:16 -- setup/hugepages.sh@100 -- # resv=0 00:04:02.913 20:55:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.913 20:55:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.913 20:55:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.913 20:55:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.913 20:55:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.914 20:55:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.914 20:55:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.914 20:55:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.914 20:55:16 -- setup/common.sh@18 -- # local node= 00:04:02.914 20:55:16 -- setup/common.sh@19 -- # local var val 00:04:02.914 20:55:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.914 20:55:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.914 20:55:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.914 20:55:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.914 20:55:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.914 20:55:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7983268 kB' 'MemAvailable: 9519656 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459524 kB' 'Inactive: 1410288 kB' 'Active(anon): 127540 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118660 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133908 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71800 kB' 'KernelStack: 6256 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13461012 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 
20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.914 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.914 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 
20:55:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 
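This scan is the accounting step of verify_nr_hugepages: HugePages_Surp and HugePages_Rsvd have already come back as 0, and the function is now fetching HugePages_Total so it can require that the kernel's total equals the pages the test configured plus any surplus and reserved pages. Condensed from the hugepages.sh entries in this trace (variable names follow the trace; the surrounding function is abridged):

    surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)   # committed to mappings, not yet faulted in
    # Global accounting must balance: 1024 == 1024 + 0 + 0 in this run.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1

Once the global figure balances, the same HugePages_Surp lookup is repeated against /sys/devices/system/node/node0/meminfo (the "local node=0" entries below) so each node's share of the pool is verified on its own.
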
00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 
20:55:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.175 20:55:16 -- setup/common.sh@33 -- # echo 1024 00:04:03.175 20:55:16 -- setup/common.sh@33 -- # return 0 00:04:03.175 20:55:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.175 20:55:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.175 20:55:16 -- setup/hugepages.sh@27 -- # local node 00:04:03.175 20:55:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.175 20:55:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.175 20:55:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.175 20:55:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.175 20:55:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.175 20:55:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.175 20:55:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.175 20:55:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.175 20:55:16 -- setup/common.sh@18 -- # local node=0 00:04:03.175 20:55:16 -- setup/common.sh@19 -- # local var val 00:04:03.175 20:55:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.175 20:55:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.175 20:55:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.175 20:55:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.175 20:55:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.175 20:55:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7983268 kB' 'MemUsed: 4258704 kB' 'SwapCached: 0 kB' 'Active: 459524 kB' 'Inactive: 1410288 kB' 'Active(anon): 127540 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1752748 kB' 'Mapped: 48680 kB' 'AnonPages: 118664 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62108 kB' 'Slab: 133908 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.175 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.175 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 
-- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # continue 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.176 20:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.176 20:55:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.176 20:55:16 -- setup/common.sh@33 -- # echo 0 00:04:03.176 20:55:16 -- setup/common.sh@33 -- # return 0 00:04:03.176 20:55:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.176 20:55:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.176 20:55:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.176 20:55:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.176 20:55:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.176 node0=1024 expecting 1024 00:04:03.176 ************************************ 00:04:03.176 END TEST default_setup 00:04:03.176 ************************************ 00:04:03.176 20:55:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.176 00:04:03.176 real 0m1.450s 00:04:03.176 user 0m0.649s 00:04:03.176 sys 0m0.743s 00:04:03.176 20:55:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.176 20:55:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.176 20:55:16 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:03.176 20:55:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.176 20:55:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.176 20:55:16 -- common/autotest_common.sh@10 -- 
# set +x 00:04:03.176 ************************************ 00:04:03.176 START TEST per_node_1G_alloc 00:04:03.176 ************************************ 00:04:03.176 20:55:16 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:03.176 20:55:16 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:03.176 20:55:16 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:03.176 20:55:16 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:03.176 20:55:16 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:03.176 20:55:16 -- setup/hugepages.sh@51 -- # shift 00:04:03.176 20:55:16 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:03.176 20:55:16 -- setup/hugepages.sh@52 -- # local node_ids 00:04:03.176 20:55:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.176 20:55:16 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:03.176 20:55:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:03.176 20:55:16 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:03.176 20:55:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.176 20:55:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:03.176 20:55:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:03.176 20:55:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.176 20:55:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.176 20:55:16 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:03.176 20:55:16 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.176 20:55:16 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:03.176 20:55:16 -- setup/hugepages.sh@73 -- # return 0 00:04:03.176 20:55:16 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:03.176 20:55:16 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:03.176 20:55:16 -- setup/hugepages.sh@146 -- # setup output 00:04:03.176 20:55:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.176 20:55:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.747 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.747 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.747 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.747 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.747 20:55:17 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:03.747 20:55:17 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:03.747 20:55:17 -- setup/hugepages.sh@89 -- # local node 00:04:03.747 20:55:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.747 20:55:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.747 20:55:17 -- setup/hugepages.sh@92 -- # local surp 00:04:03.747 20:55:17 -- setup/hugepages.sh@93 -- # local resv 00:04:03.747 20:55:17 -- setup/hugepages.sh@94 -- # local anon 00:04:03.747 20:55:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.747 20:55:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.747 20:55:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.747 20:55:17 -- setup/common.sh@18 -- # local node= 00:04:03.747 20:55:17 -- setup/common.sh@19 -- # local var val 00:04:03.747 20:55:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.747 20:55:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.747 20:55:17 -- setup/common.sh@23 -- # [[ 
-e /sys/devices/system/node/node/meminfo ]] 00:04:03.747 20:55:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.747 20:55:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.747 20:55:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 20:55:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9030112 kB' 'MemAvailable: 10566504 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 460104 kB' 'Inactive: 1410292 kB' 'Active(anon): 128120 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 119212 kB' 'Mapped: 48704 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133904 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71796 kB' 'KernelStack: 6244 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # continue 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # continue 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # continue 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # continue 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # continue 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # continue 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.747 20:55:17 -- setup/common.sh@32 -- # continue 00:04:03.747 20:55:17 -- setup/common.sh@31 -- # 
[xtrace condensed: get_meminfo's read loop walks the remaining /proc/meminfo keys (Inactive, Active(anon), Inactive(anon), Active(file), ..., Percpu, HardwareCorrupted), hitting "continue" on every key that is not AnonHugePages]
00:04:03.748 20:55:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.748 20:55:17 -- setup/common.sh@33 -- # echo 0
00:04:03.748 20:55:17 -- setup/common.sh@33 -- # return 0
00:04:03.748 20:55:17 -- setup/hugepages.sh@97 -- # anon=0
00:04:03.748 20:55:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.748 20:55:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.748 20:55:17 -- setup/common.sh@18 -- # local node=
00:04:03.748 20:55:17 -- setup/common.sh@19 -- # local var val
00:04:03.748 20:55:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.748 20:55:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.748 20:55:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.748 20:55:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.748 20:55:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.748 20:55:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.748 20:55:17 -- setup/common.sh@31 -- # IFS=': '
00:04:03.748 20:55:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9030112 kB' 'MemAvailable: 10566504 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459456 kB' 'Inactive: 1410292 kB' 'Active(anon): 127472 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118564 kB' 'Mapped: 48588 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133904 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71796 kB' 'KernelStack: 6244 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
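The field-by-field scan traced above is easier to follow as a standalone helper. The following is a minimal sketch reconstructed from the xtrace alone; SPDK's real setup/common.sh may organize get_meminfo differently, so treat the structure and defaults here as assumptions.

#!/usr/bin/env bash
# Sketch of get_meminfo() as inferred from the trace above (hypothetical).
shopt -s extglob    # for the +([0-9]) pattern used when stripping "Node N "

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem

    # With a node index, read the per-node sysfs file instead (common.sh@23-24).
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <n> "; strip it (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk key/value pairs until the requested field matches (common.sh@31-33).
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo AnonHugePages    # prints 0 on this VM, matching the trace

Each "continue" in the log is one non-matching iteration of that while loop; the echo at common.sh@33 is the value the caller captures.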
00:04:03.748 20:55:17 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same key-by-key scan repeats for HugePages_Surp, MemTotal through HugePages_Rsvd, continuing on every non-match]
00:04:03.750 20:55:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.750 20:55:17 -- setup/common.sh@33 -- # echo 0
00:04:03.750 20:55:17 -- setup/common.sh@33 -- # return 0
00:04:03.750 20:55:17 -- setup/hugepages.sh@99 -- # surp=0
00:04:03.750 20:55:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.750 20:55:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.750 20:55:17 -- setup/common.sh@18 -- # local node=
00:04:03.750 20:55:17 -- setup/common.sh@19 -- # local var val
00:04:03.750 20:55:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.750 20:55:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.750 20:55:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.750 20:55:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.750 20:55:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.750 20:55:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.750 20:55:17 -- setup/common.sh@31 -- # IFS=': '
00:04:03.750 20:55:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9030112 kB' 'MemAvailable: 10566504 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459692 kB' 'Inactive: 1410292 kB' 'Active(anon): 127708 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118808 kB' 'Mapped: 48648 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133900 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71792 kB' 'KernelStack: 6244 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
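A side note on the odd-looking \H\u\g\e\P\a\g\e\s\_\S\u\r\p tokens: they are an xtrace artifact, not code. When the right-hand side of a [[ == ]] comparison is quoted, bash's set -x output re-quotes it by backslash-escaping every character, to show it matched literally rather than as a glob. A minimal reproduction:

set -x
get=HugePages_Surp
var=HugePages_Total
[[ $var == "$get" ]]    # traces as: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
set +x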
00:04:03.750 20:55:17 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the scan repeats for HugePages_Rsvd, MemTotal through HugePages_Free, continuing on every non-match]
00:04:03.751 20:55:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.751 20:55:17 -- setup/common.sh@33 -- # echo 0
00:04:03.751 20:55:17 -- setup/common.sh@33 -- # return 0
00:04:03.751 nr_hugepages=512
00:04:03.751 resv_hugepages=0
00:04:03.751 surplus_hugepages=0
00:04:03.751 anon_hugepages=0
00:04:03.751 20:55:17 -- setup/hugepages.sh@100 -- # resv=0
00:04:03.751 20:55:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:03.751 20:55:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.751 20:55:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.751 20:55:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.751 20:55:17 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:03.751 20:55:17 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:03.751 20:55:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.751 20:55:17 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.751 20:55:17 -- setup/common.sh@18 -- # local node=
00:04:03.751 20:55:17 -- setup/common.sh@19 -- # local var val
00:04:03.751 20:55:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.751 20:55:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.751 20:55:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.751 20:55:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.751 20:55:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.751 20:55:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.751 20:55:17 -- setup/common.sh@31 -- # IFS=': '
00:04:03.751 20:55:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9030112 kB' 'MemAvailable: 10566504 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459660 kB' 'Inactive: 1410292 kB' 'Active(anon): 127676 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118824 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133916 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71808 kB' 'KernelStack: 6272 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
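Stripped of the trace, the logic at hugepages.sh@97-110 above is a plain accounting identity. A condensed sketch using the get_meminfo reconstruction from earlier (the values in comments are this run's; variable names mirror the trace):

# Hypothetical condensation of hugepages.sh@97-110 as traced above.
nr_hugepages=512                       # configured by the test
anon=$(get_meminfo AnonHugePages)      # 0
surp=$(get_meminfo HugePages_Surp)     # 0
resv=$(get_meminfo HugePages_Rsvd)     # 0

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The kernel's total must equal the configured count plus surplus and
# reserved pages: 512 == 512 + 0 + 0 in this run.
total=$(get_meminfo HugePages_Total)   # 512
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2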
00:04:03.751 20:55:17 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the scan repeats for HugePages_Total over the full field list, continuing on every non-match]
00:04:03.752 20:55:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.752 20:55:17 -- setup/common.sh@33 -- # echo 512
00:04:03.752 20:55:17 -- setup/common.sh@33 -- # return 0
00:04:03.752 20:55:17 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:03.752 20:55:17 -- setup/hugepages.sh@112 -- # get_nodes
00:04:03.752 20:55:17 -- setup/hugepages.sh@27 -- # local node
00:04:03.752 20:55:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.752 20:55:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:03.752 20:55:17 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:03.752 20:55:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:03.752 20:55:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.752 20:55:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.753 20:55:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:03.753 20:55:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.753 20:55:17 -- setup/common.sh@18 -- # local node=0
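get_nodes (hugepages.sh@27-33 above) discovers the NUMA topology by globbing sysfs. A sketch of that walk follows; filling nodes_sys from each node's HugePages_Total is an assumption, since the trace only shows the resulting 512 for node0, not how it was obtained:

# Hypothetical sketch of get_nodes as traced at hugepages.sh@27-33.
shopt -s extglob                 # the node+([0-9]) glob needs extglob
declare -a nodes_sys

for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} reduces ".../node0" to the bare index "0".
    nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")  # assumption
done

no_nodes=${#nodes_sys[@]}        # 1 on this single-node VM
(( no_nodes > 0 ))               # hugepages.sh@33 guards against an empty topology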
setup/common.sh@19 -- # local var val
00:04:03.753 20:55:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.753 20:55:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.753 20:55:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:03.753 20:55:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:03.753 20:55:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.753 20:55:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.753 20:55:17 -- setup/common.sh@31 -- # IFS=': '
00:04:03.753 20:55:17 -- setup/common.sh@31 -- # read -r var val _
00:04:03.753 20:55:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9030112 kB' 'MemUsed: 3211860 kB' 'SwapCached: 0 kB' 'Active: 459332 kB' 'Inactive: 1410292 kB' 'Active(anon): 127348 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1752748 kB' 'Mapped: 48680 kB' 'AnonPages: 118712 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62108 kB' 'Slab: 133912 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 reads each node0 meminfo line in turn with "read -r var val _", issuing continue for every key that is not HugePages_Surp]
00:04:03.753 20:55:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.753 20:55:17 -- setup/common.sh@33 -- # echo 0
00:04:03.753 20:55:17 -- setup/common.sh@33 -- # return 0
00:04:04.013 node0=512 expecting 512
00:04:04.013 ************************************
00:04:04.013 END TEST per_node_1G_alloc
00:04:04.013 ************************************
00:04:04.013 20:55:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.013 20:55:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:04.013 20:55:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:04.013 20:55:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:04.013 20:55:17 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:04.013 20:55:17 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:04.013 real 0m0.735s
00:04:04.013 user 0m0.317s
00:04:04.013 sys 0m0.429s
00:04:04.013 20:55:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:04.013 20:55:17 -- common/autotest_common.sh@10 -- # set +x
00:04:04.013 20:55:17 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:04.013 20:55:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:04.013 20:55:17 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:04.013 20:55:17 -- common/autotest_common.sh@10 -- # set +x
00:04:04.013 ************************************
00:04:04.013 START TEST even_2G_alloc
00:04:04.013 ************************************
00:04:04.013 20:55:17 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:04.013 20:55:17 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:04.013 20:55:17 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:04.013 20:55:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:04.013 20:55:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:04.013 20:55:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:04.013 20:55:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:04.013 20:55:17 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:04.013 20:55:17 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:04.013 20:55:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:04.013 20:55:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:04.013 20:55:17 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:04.013 20:55:17 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:04.013 20:55:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
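At this point in the trace, get_test_nr_hugepages has turned the requested 2097152 kB into nr_hugepages=1024. A minimal sketch of that conversion, assuming the size argument is in kB and the divisor is the 2048 kB Hugepagesize reported in the meminfo snapshots below (the function name and body here are illustrative, not the verbatim SPDK source):

    #!/usr/bin/env bash
    # Illustrative sketch: derive a 2 MiB hugepage count from a kB size.
    get_test_nr_hugepages_sketch() {
        local size=$1                 # requested size in kB (assumption)
        local default_hugepages=2048  # Hugepagesize in kB, from /proc/meminfo
        (( size >= default_hugepages )) || return 1
        echo $(( size / default_hugepages ))
    }
    get_test_nr_hugepages_sketch 2097152   # prints 1024, matching nr_hugepages above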
00:04:04.013 20:55:17 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:04.013 20:55:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:04.013 20:55:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:04.013 20:55:17 -- setup/hugepages.sh@83 -- # : 0
00:04:04.013 20:55:17 -- setup/hugepages.sh@84 -- # : 0
00:04:04.013 20:55:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:04.013 20:55:17 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:04.013 20:55:17 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:04.013 20:55:17 -- setup/hugepages.sh@153 -- # setup output
00:04:04.013 20:55:17 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.013 20:55:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:04.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:04.585 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.585 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.585 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.585 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.585 20:55:18 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:04.585 20:55:18 -- setup/hugepages.sh@89 -- # local node
00:04:04.585 20:55:18 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.585 20:55:18 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.585 20:55:18 -- setup/hugepages.sh@92 -- # local surp
00:04:04.585 20:55:18 -- setup/hugepages.sh@93 -- # local resv
00:04:04.585 20:55:18 -- setup/hugepages.sh@94 -- # local anon
00:04:04.585 20:55:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.585 20:55:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.585 20:55:18 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.585 20:55:18 -- setup/common.sh@18 -- # local node=
00:04:04.585 20:55:18 -- setup/common.sh@19 -- # local var val
00:04:04.585 20:55:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.585 20:55:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.585 20:55:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.585 20:55:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.585 20:55:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.585 20:55:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.585 20:55:18 -- setup/common.sh@31 -- # IFS=': '
00:04:04.585 20:55:18 -- setup/common.sh@31 -- # read -r var val _
00:04:04.585 20:55:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981084 kB' 'MemAvailable: 9517476 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 460232 kB' 'Inactive: 1410292 kB' 'Active(anon): 128248 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 119392 kB' 'Mapped: 48732 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133912 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71804 kB' 'KernelStack: 6312 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@31-32 walks every /proc/meminfo key, issuing continue until AnonHugePages matches]
00:04:04.585 20:55:18 -- setup/common.sh@33 -- # echo 0
00:04:04.585 20:55:18 -- setup/common.sh@33 -- # return 0
00:04:04.585 20:55:18 -- setup/hugepages.sh@97 -- # anon=0
00:04:04.586 20:55:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.586 20:55:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.586 20:55:18 -- setup/common.sh@18 -- # local node=
00:04:04.586 20:55:18 -- setup/common.sh@19 -- # local var val
00:04:04.586 20:55:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.586 20:55:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.586 20:55:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.586 20:55:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.586 20:55:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.586 20:55:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.586 20:55:18 -- setup/common.sh@31 -- # IFS=': '
00:04:04.586 20:55:18 -- setup/common.sh@31 -- # read -r var val _
00:04:04.586 20:55:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981084 kB' 'MemAvailable: 9517476 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459868 kB' 'Inactive: 1410292 kB' 'Active(anon): 127884 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118988 kB' 'Mapped: 48856 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133912 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71804 kB' 'KernelStack: 6264 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the same per-key walk repeats, continuing past every key until HugePages_Surp matches]
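The AnonHugePages lookup above returned 0, and the identical walk is now repeating for HugePages_Surp. A minimal sketch of the pattern these traces show in setup/common.sh, reconstructed from the trace rather than copied from the script: pick /proc/meminfo or a per-node sysfs file, strip the "Node N " prefix, then read key/value pairs with IFS=': ' until the requested key matches.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    # Reconstruction of the traced lookup; name and exact body are assumptions.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ mem line
        local mem_f=/proc/meminfo
        # prefer the per-node file when a node was requested and it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of sysfs lines
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo_sketch HugePages_Surp     # prints 0 on this box
    get_meminfo_sketch HugePages_Free 0   # per-node variant, as in the node0 query earlier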
00:04:04.587 20:55:18 -- setup/common.sh@33 -- # echo 0
00:04:04.587 20:55:18 -- setup/common.sh@33 -- # return 0
00:04:04.587 20:55:18 -- setup/hugepages.sh@99 -- # surp=0
00:04:04.587 20:55:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:04.587 20:55:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:04.587 20:55:18 -- setup/common.sh@18 -- # local node=
00:04:04.587 20:55:18 -- setup/common.sh@19 -- # local var val
00:04:04.587 20:55:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.587 20:55:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.587 20:55:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.587 20:55:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.587 20:55:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.587 20:55:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.587 20:55:18 -- setup/common.sh@31 -- # IFS=': '
00:04:04.587 20:55:18 -- setup/common.sh@31 -- # read -r var val _
00:04:04.587 20:55:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981084 kB' 'MemAvailable: 9517476 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459672 kB' 'Inactive: 1410292 kB' 'Active(anon): 127688 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118788 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133972 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71864 kB' 'KernelStack: 6272 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: per-key walk again, continuing past every key until HugePages_Rsvd matches]
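A note on the mem=("${mem[@]#Node +([0-9]) }") step that recurs in each lookup: the per-node sysfs file prefixes every line with "Node N ", unlike /proc/meminfo, so the extglob pattern strips that prefix before key matching. A self-contained example using the node0 MemFree value from the snapshot earlier:

    #!/usr/bin/env bash
    shopt -s extglob
    line='Node 0 MemFree: 9030112 kB'   # per-node sysfs format
    echo "${line#Node +([0-9]) }"       # -> MemFree: 9030112 kB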
00:04:04.588 20:55:18 -- setup/common.sh@33 -- # echo 0
00:04:04.588 20:55:18 -- setup/common.sh@33 -- # return 0
00:04:04.588 20:55:18 -- setup/hugepages.sh@100 -- # resv=0
00:04:04.588 nr_hugepages=1024
00:04:04.588 20:55:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:04.588 resv_hugepages=0
00:04:04.588 20:55:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:04.588 surplus_hugepages=0
00:04:04.588 anon_hugepages=0
00:04:04.588 20:55:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:04.588 20:55:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:04.588 20:55:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.589 20:55:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:04.589 20:55:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.589 20:55:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.589 20:55:18 -- setup/common.sh@18 -- # local node=
00:04:04.589 20:55:18 -- setup/common.sh@19 -- # local var val
00:04:04.589 20:55:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.589 20:55:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.589 20:55:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
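The assertions at hugepages.sh@107-110 tie the three lookups together: the kernel's hugepage totals must match the request once surplus and reserved pages are accounted for (the 1024 on the left of each check is already variable-expanded by xtrace). A sketch of that arithmetic, reusing the hypothetical get_meminfo_sketch from above:

    #!/usr/bin/env bash
    # Values as traced: nr_hugepages=1024, surp=0, resv=0, anon=0.
    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
    total=$(get_meminfo_sketch HugePages_Total)  # 1024
    # the two traced checks, written out in plain form:
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( total == nr_hugepages ))               || echo 'unexpected surplus/reserved pages' >&2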
00:04:04.589 20:55:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.589 20:55:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.589 20:55:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.589 20:55:18 -- setup/common.sh@31 -- # IFS=': '
00:04:04.589 20:55:18 -- setup/common.sh@31 -- # read -r var val _
00:04:04.589 20:55:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981084 kB' 'MemAvailable: 9517476 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459620 kB' 'Inactive: 1410292 kB' 'Active(anon): 127636 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118732 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133972 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71864 kB' 'KernelStack: 6256 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: per-key walk toward HugePages_Total; the captured log breaks off mid-loop here]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 
20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.590 20:55:18 -- setup/common.sh@33 -- # echo 1024 00:04:04.590 20:55:18 -- setup/common.sh@33 -- # return 0 00:04:04.590 20:55:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.590 20:55:18 -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.590 20:55:18 -- setup/hugepages.sh@27 -- # local node 00:04:04.590 20:55:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.590 20:55:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.590 20:55:18 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.590 20:55:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.590 20:55:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.590 20:55:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.590 20:55:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.590 20:55:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.590 20:55:18 -- setup/common.sh@18 -- # local node=0 00:04:04.590 20:55:18 -- setup/common.sh@19 -- # local var val 00:04:04.590 20:55:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.590 20:55:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.590 20:55:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.590 20:55:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.590 20:55:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.590 20:55:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981084 kB' 'MemUsed: 4260888 kB' 'SwapCached: 0 kB' 'Active: 459660 kB' 'Inactive: 1410292 kB' 'Active(anon): 127676 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1752748 kB' 'Mapped: 48680 kB' 'AnonPages: 118844 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62108 kB' 'Slab: 133972 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 
0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 
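The stretch above is setup/common.sh's get_meminfo helper resolving a per-node query: with node=0 it switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, slurps the file with mapfile, strips the "Node N " prefix, and then walks the fields one read at a time (the walk resumes below until it reaches HugePages_Surp). A minimal standalone sketch of that parsing pattern, assuming only bash with extglob; the real helper carries extra xtrace plumbing and state:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used below

    # Return a single field from /proc/meminfo, or from a node's meminfo
    # file when a node number is passed as the second argument.
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it so
        # both file formats parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Surp 0   # prints node0's surplus hugepage count, 0 in this run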
00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.590 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # continue 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.591 20:55:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.591 20:55:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.591 20:55:18 -- setup/common.sh@33 -- # echo 0 00:04:04.591 20:55:18 -- setup/common.sh@33 -- # return 0 00:04:04.591 20:55:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.591 20:55:18 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.591 20:55:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.591 node0=1024 expecting 1024 00:04:04.591 20:55:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.591 20:55:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.591 20:55:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.591 00:04:04.591 real 0m0.759s 00:04:04.591 user 0m0.329s 00:04:04.591 sys 0m0.450s 00:04:04.591 20:55:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.591 ************************************ 00:04:04.591 END TEST even_2G_alloc 00:04:04.591 ************************************ 00:04:04.591 20:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:04.850 20:55:18 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:04.850 20:55:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:04.850 20:55:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.850 20:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:04.850 ************************************ 00:04:04.850 START TEST odd_alloc 00:04:04.850 ************************************ 00:04:04.850 20:55:18 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:04.850 20:55:18 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:04.850 20:55:18 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:04.850 20:55:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.850 20:55:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.850 20:55:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:04.850 20:55:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.850 20:55:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.850 20:55:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.850 20:55:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:04.850 20:55:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.850 20:55:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.850 20:55:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.850 20:55:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.850 20:55:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.850 20:55:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.850 20:55:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:04.850 20:55:18 -- setup/hugepages.sh@83 -- # : 0 00:04:04.850 20:55:18 -- setup/hugepages.sh@84 -- # : 0 00:04:04.850 20:55:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.850 20:55:18 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:04.850 20:55:18 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:04.850 20:55:18 -- setup/hugepages.sh@160 -- # setup output 00:04:04.850 20:55:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.850 20:55:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.370 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.370 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.370 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.370 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.370 20:55:19 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:05.370 20:55:19 
-- setup/hugepages.sh@89 -- # local node 00:04:05.370 20:55:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.370 20:55:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.370 20:55:19 -- setup/hugepages.sh@92 -- # local surp 00:04:05.370 20:55:19 -- setup/hugepages.sh@93 -- # local resv 00:04:05.370 20:55:19 -- setup/hugepages.sh@94 -- # local anon 00:04:05.370 20:55:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.370 20:55:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.370 20:55:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.370 20:55:19 -- setup/common.sh@18 -- # local node= 00:04:05.370 20:55:19 -- setup/common.sh@19 -- # local var val 00:04:05.370 20:55:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.370 20:55:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.370 20:55:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.371 20:55:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.371 20:55:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.371 20:55:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7978076 kB' 'MemAvailable: 9514468 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459800 kB' 'Inactive: 1410292 kB' 'Active(anon): 127816 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 119172 kB' 'Mapped: 48932 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133964 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71856 kB' 'KernelStack: 6284 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
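Two details in the odd_alloc setup traced above are worth unpacking. The test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] reads /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word is the active THP mode; only when that mode is not [never] does verify_nr_hugepages go on to read AnonHugePages (the field walk that continues below). And the sizing call get_test_nr_hugepages 2098176 lands on nr_hugepages=1025, an odd count by design. The ceiling division below is inferred from the traced values rather than copied from hugepages.sh, but the arithmetic is forced either way: 1024 pages of 2048 kB cover only 2097152 kB, short of the requested size.

    # HUGEMEM=2049 (MB) -> 2098176 kB -> 1025 pages of 2048 kB.
    # Rounding inferred from the traced nr_hugepages=1025.
    default_hugepages=2048              # kB, matches 'Hugepagesize: 2048 kB'
    HUGEMEM=2049                        # MB, as exported by the odd_alloc test
    size=$(( HUGEMEM * 1024 ))          # 2098176 kB, the traced argument
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
    echo "$nr_hugepages"                # 1025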
00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 
-- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.371 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.371 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.372 20:55:19 -- setup/common.sh@33 -- # echo 0 00:04:05.372 20:55:19 -- setup/common.sh@33 -- # return 0 00:04:05.372 20:55:19 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.372 20:55:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.372 20:55:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.372 20:55:19 
-- setup/common.sh@18 -- # local node= 00:04:05.372 20:55:19 -- setup/common.sh@19 -- # local var val 00:04:05.372 20:55:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.372 20:55:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.372 20:55:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.372 20:55:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.372 20:55:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.372 20:55:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7978076 kB' 'MemAvailable: 9514468 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459556 kB' 'Inactive: 1410292 kB' 'Active(anon): 127572 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118676 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133992 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71884 kB' 'KernelStack: 6256 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.372 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.372 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.373 20:55:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.373 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.373 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.373 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.373 20:55:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.373 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.373 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.373 20:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.373 20:55:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.373 20:55:19 -- setup/common.sh@32 -- # continue 00:04:05.373 20:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.373 20:55:19 -- 
00:04:05.373 [xtrace elided: setup/common.sh@31-32 walks the remaining /proc/meminfo keys (Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) and skips each with continue until HugePages_Surp matches]
00:04:05.373 20:55:19 -- setup/common.sh@33 -- # echo 0
00:04:05.373 20:55:19 -- setup/common.sh@33 -- # return 0
00:04:05.373 20:55:19 -- setup/hugepages.sh@99 -- # surp=0
00:04:05.373 20:55:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.373 20:55:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.373 20:55:19 -- setup/common.sh@18 -- # local node=
00:04:05.373 20:55:19 -- setup/common.sh@19 -- # local var val
00:04:05.373 20:55:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.373 20:55:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.373 20:55:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.373 20:55:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.373 20:55:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.373 20:55:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.373 20:55:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7978076 kB' 'MemAvailable: 9514468 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459348 kB' 'Inactive: 1410292 kB' 'Active(anon): 127364 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118724 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133992 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71884 kB' 'KernelStack: 6256 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
00:04:05.374 [xtrace elided: the same key-matching loop compares every field of this snapshot against HugePages_Rsvd, continuing past each non-match]
00:04:05.375 20:55:19 -- setup/common.sh@33 -- # echo 0
00:04:05.375 20:55:19 -- setup/common.sh@33 -- # return 0
00:04:05.375 20:55:19 -- setup/hugepages.sh@100 -- # resv=0
00:04:05.375 20:55:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:05.375 nr_hugepages=1025
00:04:05.375 20:55:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.375 resv_hugepages=0
00:04:05.375 20:55:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.375 surplus_hugepages=0
00:04:05.375 20:55:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.375 anon_hugepages=0
00:04:05.375 20:55:19 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:05.375 20:55:19 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
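The surp and resv values above come from the same get_meminfo helper the xtrace keeps re-entering: it slurps a meminfo file into an array, strips any per-node "Node N " prefix, then walks the lines with a field-splitting read until the requested key matches. A condensed sketch of that loop, reconstructed from the trace with the same variable names (error handling and xtrace plumbing omitted, so treat it as an approximation rather than the verbatim setup/common.sh):

    #!/usr/bin/env bash
    # Print the value of one meminfo key, system-wide or for one NUMA node.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem line
        # Per-node queries read the sysfs copy of meminfo instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # sysfs prefixes every line with "Node N "; strip it (needs extglob).
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the continues flooding the trace
            echo "$val"                        # e.g. 0 for HugePages_Rsvd here
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Rsvd it scans /proc/meminfo; called as get_meminfo HugePages_Surp 0 (as later in this run) it scans node0's sysfs copy instead.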
00:04:05.375 20:55:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.375 20:55:19 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.375 20:55:19 -- setup/common.sh@18 -- # local node=
00:04:05.375 20:55:19 -- setup/common.sh@19 -- # local var val
00:04:05.375 20:55:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.375 20:55:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.375 20:55:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.375 20:55:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.375 20:55:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.375 20:55:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.375 20:55:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7978076 kB' 'MemAvailable: 9514468 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 459612 kB' 'Inactive: 1410292 kB' 'Active(anon): 127628 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118868 kB' 'Mapped: 48940 kB' 'Shmem: 10472 kB' 'KReclaimable: 62108 kB' 'Slab: 133992 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71884 kB' 'KernelStack: 6320 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 345096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
00:04:05.376 [xtrace elided: the key-matching loop continues past every field of this snapshot until HugePages_Total matches]
00:04:05.376 20:55:19 -- setup/common.sh@33 -- # echo 1025
00:04:05.376 20:55:19 -- setup/common.sh@33 -- # return 0
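With the total in hand, the @107 guard above and the @110 recheck that follows reduce to one accounting identity: the kernel's HugePages_Total must equal the pages the test requested plus any surplus and reserved pages. A minimal restatement of that check, assuming the get_meminfo sketch above (the literal 1025 is the odd_alloc request visible in the trace):

    nr_hugepages=1025                       # requested by the odd_alloc test
    total=$(get_meminfo HugePages_Total)    # 1025 in this run
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
    fi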
00:04:05.376 20:55:19 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:05.376 20:55:19 -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.376 20:55:19 -- setup/hugepages.sh@27 -- # local node
00:04:05.376 20:55:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.376 20:55:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:05.376 20:55:19 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:05.376 20:55:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:05.376 20:55:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.376 20:55:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:05.376 20:55:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.376 20:55:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.376 20:55:19 -- setup/common.sh@18 -- # local node=0
00:04:05.376 20:55:19 -- setup/common.sh@19 -- # local var val
00:04:05.376 20:55:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.376 20:55:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.376 20:55:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:05.376 20:55:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.376 20:55:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.376 20:55:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.376 20:55:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7978076 kB' 'MemUsed: 4263896 kB' 'SwapCached: 0 kB' 'Active: 459480 kB' 'Inactive: 1410292 kB' 'Active(anon): 127496 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1752748 kB' 'Mapped: 48680 kB' 'AnonPages: 118944 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62108 kB' 'Slab: 133940 kB' 'SReclaimable: 62108 kB' 'SUnreclaim: 71832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:05.377 [xtrace elided: the key-matching loop walks node0's meminfo copy until HugePages_Surp matches]
00:04:05.377 20:55:19 -- setup/common.sh@33 -- # echo 0
00:04:05.377 20:55:19 -- setup/common.sh@33 -- # return 0
00:04:05.637 20:55:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.637 20:55:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.637 20:55:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.637 20:55:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.637 20:55:19 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:05.637 node0=1025 expecting 1025
00:04:05.637 20:55:19 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:05.637 ************************************
00:04:05.637 END TEST odd_alloc
00:04:05.637 ************************************
00:04:05.637 real 0m0.759s
00:04:05.637 user 0m0.345s
00:04:05.637 sys 0m0.434s
00:04:05.637 20:55:19 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:05.637 20:55:19 -- common/autotest_common.sh@10 -- # set +x
00:04:05.637 20:55:19 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:05.637 20:55:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:05.637 20:55:19 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:05.637 20:55:19 -- common/autotest_common.sh@10 -- # set +x
00:04:05.637 ************************************
00:04:05.637 START TEST custom_alloc
00:04:05.637 ************************************
00:04:05.637 20:55:19 -- common/autotest_common.sh@1104 -- # custom_alloc
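Unlike odd_alloc, custom_alloc sizes its reservation from a kilobyte budget rather than a page count: in the trace that follows, get_test_nr_hugepages takes 1048576 (kB, i.e. 1 GiB) and settles on nr_hugepages=512, which is consistent with the 2048 kB Hugepagesize reported in every snapshot above. A sketch of that conversion, assuming the divisor is the default hugepage size from /proc/meminfo (the trace shows only the size check and the result, not the division itself):

    size_kb=1048576   # requested budget in kB (1 GiB)
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    if (( size_kb >= default_hugepages )); then
        nr_hugepages=$(( size_kb / default_hugepages ))   # 1048576 / 2048 = 512
    fi
    echo "nr_hugepages=$nr_hugepages"

With a single NUMA node, all 512 pages land in nodes_hp[0], which is exactly what the HUGENODE='nodes_hp[0]=512' assignment below encodes.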
00:04:05.637 20:55:19 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:05.637 20:55:19 -- setup/hugepages.sh@169 -- # local node
00:04:05.637 20:55:19 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:05.637 20:55:19 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:05.637 20:55:19 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:05.637 20:55:19 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:05.637 20:55:19 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:05.637 20:55:19 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:05.637 20:55:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:05.637 20:55:19 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:05.637 [xtrace elided: get_test_nr_hugepages_per_node (hugepages.sh@58-@84) seeds nodes_test[0]=512 for the single node]
00:04:05.637 20:55:19 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:05.637 20:55:19 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:05.637 20:55:19 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:05.637 20:55:19 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:05.637 20:55:19 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:05.637 [xtrace elided: a second get_test_nr_hugepages_per_node pass (hugepages.sh@186, @62-@78) distributes the 512 pages over nodes_test and returns 0]
00:04:05.637 20:55:19 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:05.637 20:55:19 -- setup/hugepages.sh@187 -- # setup output
00:04:05.637 20:55:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.637 20:55:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:06.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:06.208 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.208 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.208 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.208 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.208 20:55:19 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:06.208 20:55:19 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:06.208 20:55:19 -- setup/hugepages.sh@89 -- # local node
00:04:06.209 20:55:19 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.209 20:55:19 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.209 20:55:19 -- setup/hugepages.sh@92 -- # local surp
00:04:06.209 20:55:19 -- setup/hugepages.sh@93 -- # local resv
00:04:06.209 20:55:19 -- setup/hugepages.sh@94 -- # local anon
00:04:06.209 20:55:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
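The @96 test above gates the anonymous-hugepage probe on transparent hugepages not being disabled: the bracketed token in the THP mode string marks the active setting, so "always [madvise] never" passes while "always madvise [never]" would not. A minimal sketch of the same pattern match, assuming the standard sysfs location of the mode file:

    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # THP active: count anonymous hugepages
    else
        anon=0                              # THP forced off: nothing to count
    fi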
00:04:06.210 20:55:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.210 20:55:19 -- setup/common.sh@33 -- # echo 0
00:04:06.210 20:55:19 -- setup/common.sh@33 -- # return 0
00:04:06.210 20:55:19 -- setup/hugepages.sh@97 -- # anon=0
00:04:06.210 20:55:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.210 20:55:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.210 20:55:19 -- setup/common.sh@18 -- # local node=
00:04:06.210 20:55:19 -- setup/common.sh@19 -- # local var val
00:04:06.210 20:55:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.210 20:55:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.210 20:55:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.210 20:55:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.210 20:55:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.210 20:55:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.210 20:55:19 -- setup/common.sh@31 -- # IFS=': '
00:04:06.210 20:55:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9025216 kB' 'MemAvailable: 10561604 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 456644 kB' 'Inactive: 1410292 kB' 'Active(anon): 124660 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 115728 kB' 'Mapped: 47996 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133884 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71784 kB' 'KernelStack: 6172 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 334584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
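The get_meminfo calls traced above all follow the same pattern: read the meminfo file key by key with IFS=': ', skip with continue until the requested key matches, then echo its value. A minimal standalone sketch of that technique for the plain /proc/meminfo case (illustrative, not the verbatim setup/common.sh source):

    # Sketch: look up one key in /proc/meminfo, the way the trace above does.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # keep scanning until the key matches
            echo "$val"                        # kB figure, or a bare count for HugePages_*
            return 0
        done < /proc/meminfo
    }

    get_meminfo HugePages_Surp   # on the VM above this prints 0

The per-node case additionally has to strip a "Node <n> " prefix first, which is what the mapfile/mem=() lines in the trace handle.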
00:04:06.212 20:55:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.212 20:55:19 -- setup/common.sh@33 -- # echo 0
00:04:06.212 20:55:19 -- setup/common.sh@33 -- # return 0
00:04:06.212 20:55:19 -- setup/hugepages.sh@99 -- # surp=0
00:04:06.212 20:55:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.212 20:55:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.212 20:55:19 -- setup/common.sh@18 -- # local node=
00:04:06.212 20:55:19 -- setup/common.sh@19 -- # local var val
00:04:06.212 20:55:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.212 20:55:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.212 20:55:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.212 20:55:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.212 20:55:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.212 20:55:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.212 20:55:19 -- setup/common.sh@31 -- # IFS=': '
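The mapfile -t mem and mem=("${mem[@]#Node +([0-9]) }") pair in the trace exists because the per-node meminfo files under /sys prefix every line with "Node <n> ". A small demonstration of that extglob strip, using sample lines shaped like this node's file:

    shopt -s extglob                      # +([0-9]) below is an extglob pattern
    mem=('Node 0 MemTotal: 12241972 kB' 'Node 0 HugePages_Total: 512')
    mem=("${mem[@]#Node +([0-9]) }")      # drop the leading 'Node <n> '
    printf '%s\n' "${mem[@]}"             # now parses exactly like /proc/meminfo lines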
00:04:06.212 20:55:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9025216 kB' 'MemAvailable: 10561604 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 456700 kB' 'Inactive: 1410292 kB' 'Active(anon): 124716 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 116096 kB' 'Mapped: 47996 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133884 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71784 kB' 'KernelStack: 6188 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 334584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
00:04:06.214 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.214 20:55:20 -- setup/common.sh@33 -- # echo 0
00:04:06.214 20:55:20 -- setup/common.sh@33 -- # return 0
00:04:06.214 nr_hugepages=512
00:04:06.214 resv_hugepages=0
00:04:06.214 surplus_hugepages=0
00:04:06.214 anon_hugepages=0
00:04:06.214 20:55:20 -- setup/hugepages.sh@100 -- # resv=0
00:04:06.214 20:55:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:06.214 20:55:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.214 20:55:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.214 20:55:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.214 20:55:20 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:06.214 20:55:20 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:06.214 20:55:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.214 20:55:20 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.214 20:55:20 -- setup/common.sh@18 -- # local node=
00:04:06.214 20:55:20 -- setup/common.sh@19 -- # local var val
00:04:06.214 20:55:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.214 20:55:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.214 20:55:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.214 20:55:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.214 20:55:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.214 20:55:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.214 20:55:20 -- setup/common.sh@31 -- # IFS=': '
00:04:06.214 20:55:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9025564 kB' 'MemAvailable: 10561952 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 456640 kB' 'Inactive: 1410292 kB' 'Active(anon): 124656 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 116020 kB' 'Mapped: 47940 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133848 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71748 kB' 'KernelStack: 6208 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 334584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
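The arithmetic at hugepages.sh@107 and @109 is the real assertion of this step: the 512 pages the test preallocated must all still be accounted for, with none reserved or surplus. Restated as a standalone check (a sketch, not the script's own code):

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    # Mirrors (( 512 == nr_hugepages + surp + resv )): only true when rsvd and surp are 0.
    (( total == 512 && 512 == total + surp + rsvd )) && echo 'hugepage pool intact'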
00:04:06.216 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.216 20:55:20 -- setup/common.sh@33 -- # echo 512
00:04:06.216 20:55:20 -- setup/common.sh@33 -- # return 0
00:04:06.216 20:55:20 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:06.216 20:55:20 -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.216 20:55:20 -- setup/hugepages.sh@27 -- # local node
00:04:06.216 20:55:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.216 20:55:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:06.216 20:55:20 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:06.216 20:55:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:06.216 20:55:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.216 20:55:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.216 20:55:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:06.216 20:55:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.216 20:55:20 -- setup/common.sh@18 -- # local node=0
00:04:06.216 20:55:20 -- setup/common.sh@19 -- # local var val
00:04:06.216 20:55:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.216 20:55:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.216 20:55:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:06.216 20:55:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:06.216 20:55:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.216 20:55:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.216 20:55:20 -- setup/common.sh@31 -- # IFS=': '
00:04:06.216 20:55:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9025564 kB' 'MemUsed: 3216408 kB' 'SwapCached: 0 kB' 'Active: 456624 kB' 'Inactive: 1410292 kB' 'Active(anon): 124640 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1752748 kB' 'Mapped: 47940 kB' 'AnonPages: 116016 kB' 'Shmem: 10472 kB' 'KernelStack: 6208 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62100 kB' 'Slab: 133844 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
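get_meminfo HugePages_Surp 0 is the per-node variant: the @23/@24 traces show mem_f being switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo. A sketch of reading the same per-node counters directly (field positions assume the kernel's "Node <n> Key: value" layout seen in those files):

    for f in /sys/devices/system/node/node[0-9]*/meminfo; do
        node=${f%/meminfo}
        node=${node##*node}
        # $3/$4 are 'HugePages_Total:' and its count, after the 'Node <n>' prefix fields.
        awk -v n="$node" '$3 == "HugePages_Total:" {print "node" n ": " $4 " hugepages"}' "$f"
    done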
00:04:06.217 20:55:20 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # continue 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # continue 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # continue 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # continue 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # continue 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # continue 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # continue 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # continue 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.217 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.217 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.217 20:55:20 -- setup/common.sh@33 -- # echo 0 00:04:06.217 20:55:20 -- setup/common.sh@33 -- # return 0 00:04:06.217 node0=512 expecting 512 00:04:06.217 20:55:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.217 20:55:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.217 20:55:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.217 20:55:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.217 20:55:20 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.217 20:55:20 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:06.217 00:04:06.217 real 0m0.775s 00:04:06.217 user 0m0.351s 00:04:06.217 sys 0m0.435s 00:04:06.217 20:55:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.217 ************************************ 00:04:06.217 END TEST custom_alloc 00:04:06.217 ************************************ 00:04:06.217 20:55:20 -- common/autotest_common.sh@10 -- # set +x 00:04:06.477 20:55:20 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:06.477 20:55:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.477 20:55:20 -- 
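The banners and the real/user/sys triple above come from the run_test wrapper in autotest_common.sh, which the trace shows being invoked again for no_shrink_alloc. A rough sketch of that pattern, reconstructed only from what the log shows (START/END banners, the '[' 2 -le 1 ']' argument-count probe at @1077, timing output); the body is an approximation, and the real wrapper also manages xtrace state:

    run_test() {                       # usage: run_test <name> <command> [args...]
        local test_name=$1
        shift
        if [ "$#" -le 1 ]; then        # the '[' 2 -le 1 ']' probe seen at @1077
            :                          # single-argument form; nothing extra in this sketch
        fi
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # bash's time keyword emits the real/user/sys triple
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$rc"
    }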
00:04:06.477 20:55:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:06.477 20:55:20 -- common/autotest_common.sh@10 -- # set +x
00:04:06.477 ************************************
00:04:06.477 START TEST no_shrink_alloc
00:04:06.477 ************************************
00:04:06.477 20:55:20 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:06.477 20:55:20 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:06.477 20:55:20 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:06.477 20:55:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:06.477 20:55:20 -- setup/hugepages.sh@51 -- # shift
00:04:06.477 20:55:20 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:06.477 20:55:20 -- setup/hugepages.sh@52 -- # local node_ids
00:04:06.477 20:55:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:06.477 20:55:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:06.477 20:55:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:06.477 20:55:20 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:06.477 20:55:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:06.477 20:55:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:06.477 20:55:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:06.477 20:55:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:06.477 20:55:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:06.477 20:55:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:06.477 20:55:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:06.477 20:55:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:06.477 20:55:20 -- setup/hugepages.sh@73 -- # return 0
00:04:06.477 20:55:20 -- setup/hugepages.sh@198 -- # setup output
00:04:06.477 20:55:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.477 20:55:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:06.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:06.998 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.998 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.998 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.998 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.998 20:55:20 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:06.998 20:55:20 -- setup/hugepages.sh@89 -- # local node
00:04:06.998 20:55:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.998 20:55:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.998 20:55:20 -- setup/hugepages.sh@92 -- # local surp
00:04:06.998 20:55:20 -- setup/hugepages.sh@93 -- # local resv
00:04:06.998 20:55:20 -- setup/hugepages.sh@94 -- # local anon
00:04:06.998 20:55:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.998 20:55:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.998 20:55:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.998 20:55:20 -- setup/common.sh@18 -- # local node=
00:04:06.998 20:55:20 -- setup/common.sh@19 -- # local var val
00:04:06.998 20:55:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.998 20:55:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.998 20:55:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.998 20:55:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.998 20:55:20 -- setup/common.sh@28 -- # mapfile -t mem
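To make the next stretch of trace easier to follow: get_meminfo in setup/common.sh reads /proc/meminfo (or a per-node file under /sys when a node id is given) into an array, strips the "Node N " prefix that the sysfs variant carries, then scans field by field until the requested key matches. A minimal sketch of that pattern, reconstructed from the xtrace lines above; the while/done < <(printf ...) plumbing is an assumption, only the individual statements are taken from the trace:

    shopt -s extglob                      # needed for the +([0-9]) pattern below

    get_meminfo() {                       # usage: get_meminfo <field> [numa-node]
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # with a node id, read that node's meminfo from sysfs instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # sysfs lines carry a "Node N " prefix; strip it
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then # quoted operand: literal string compare, no globbing
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the snapshot printed just below, get_meminfo MemFree would print 7978164, and get_meminfo AnonHugePages prints the 0 that the trace stores as anon=0.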
00:04:06.998 20:55:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.998 20:55:20 -- setup/common.sh@31 -- # IFS=': '
00:04:06.998 20:55:20 -- setup/common.sh@31 -- # read -r var val _
00:04:06.998 20:55:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7978164 kB' 'MemAvailable: 9514552 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 457080 kB' 'Inactive: 1410292 kB' 'Active(anon): 125096 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116244 kB' 'Mapped: 48236 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133824 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71724 kB' 'KernelStack: 6252 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
00:04:06.999 [xtrace condensed: setup/common.sh@31-32 read/continue loop stepped through MemTotal ... HardwareCorrupted, each failing the match against AnonHugePages]
00:04:06.999 20:55:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.999 20:55:20 -- setup/common.sh@33 -- # echo 0
00:04:06.999 20:55:20 -- setup/common.sh@33 -- # return 0
00:04:06.999 20:55:20 -- setup/hugepages.sh@97 -- # anon=0
00:04:06.999 20:55:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.999 20:55:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.999 20:55:20 -- setup/common.sh@18 -- # local node=
00:04:06.999 20:55:20 -- setup/common.sh@19 -- # local var val
00:04:06.999 20:55:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.999 20:55:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.000 20:55:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.000 20:55:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.000 20:55:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.000 20:55:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.000 20:55:20 -- setup/common.sh@31 -- # IFS=': '
00:04:07.000 20:55:20 -- setup/common.sh@31 -- # read -r var val _
00:04:07.000 20:55:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7978164 kB' 'MemAvailable: 9514552 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 456612 kB' 'Inactive: 1410292 kB' 'Active(anon): 124628 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115988 kB' 'Mapped: 48000 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133828 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71728 kB' 'KernelStack: 6192 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
00:04:07.000 [xtrace condensed: setup/common.sh@31-32 read/continue loop stepped through MemTotal ... HugePages_Free without matching HugePages_Surp]
00:04:07.001 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.001 20:55:20 -- setup/common.sh@33 -- # echo 0
00:04:07.001 20:55:20 -- setup/common.sh@33 -- # return 0
00:04:07.001 20:55:20 -- setup/hugepages.sh@99 -- # surp=0
00:04:07.001 20:55:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:07.001 20:55:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:07.001 20:55:20 -- setup/common.sh@18 -- # local node=
00:04:07.001 20:55:20 -- setup/common.sh@19 -- # local var val
00:04:07.001 20:55:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:07.001 20:55:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.001 20:55:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.001 20:55:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.001 20:55:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.001 20:55:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.001 20:55:20 -- setup/common.sh@31 -- # IFS=': '
00:04:07.001 20:55:20 -- setup/common.sh@31 -- # read -r var val _
00:04:07.001 20:55:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7978308 kB' 'MemAvailable: 9514696 kB' 'Buffers: 2436 kB' 'Cached: 1750312 kB' 'SwapCached: 0 kB' 'Active: 456592 kB' 'Inactive: 1410292 kB' 'Active(anon): 124608 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115952 kB' 'Mapped: 48200 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133828 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71728 kB' 'KernelStack: 6224 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
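Two reading aids for the stretch above and below. First, forms like [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] are simply how xtrace prints [[ $var == "$get" ]]: every character of the quoted right-hand side is backslash-escaped so it cannot act as a glob, making the test a literal string comparison. Second, the hugepages.sh@97/@99/@100 calls traced here reduce to three assignments (variable names as in the trace; the comments are my gloss):

    anon=$(get_meminfo AnonHugePages)   # kB of transparent hugepages in use; 0 in this run
    surp=$(get_meminfo HugePages_Surp)  # pages allocated beyond the configured pool; 0
    resv=$(get_meminfo HugePages_Rsvd)  # pages reserved but not yet faulted in; queried next, also 0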
00:04:07.001 [xtrace condensed: setup/common.sh@31-32 read/continue loop stepped through every field from MemTotal to HugePages_Free without matching HugePages_Rsvd]
00:04:07.002 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.002 20:55:20 -- setup/common.sh@33 -- # echo 0
00:04:07.002 20:55:20 -- setup/common.sh@33 -- # return 0
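With all three counters back, the rest of verify_nr_hugepages traced below reduces to a few echoes and the accounting check at hugepages.sh@107. A condensed reconstruction of that flow, under the assumption that the surrounding function looks roughly like this (the real helper also folds in the per-node nodes_test/sorted_t bookkeeping seen earlier, and the trace evaluates the checks before the @110 call):

    verify_nr_hugepages() {
        local anon surp resv
        anon=$(get_meminfo AnonHugePages)     # hugepages.sh@97
        surp=$(get_meminfo HugePages_Surp)    # @99
        resv=$(get_meminfo HugePages_Rsvd)    # @100
        echo "nr_hugepages=$nr_hugepages"     # @102..@105
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # @107/@109: the configured pool must account for every page the kernel
        # reports, net of surplus and reserved ones; here 1024 == 1024 + 0 + 0
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    }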
00:04:07.002 nr_hugepages=1024 00:04:07.002 resv_hugepages=0 00:04:07.002 surplus_hugepages=0 00:04:07.002 20:55:20 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.002 20:55:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.002 20:55:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.002 20:55:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.002 anon_hugepages=0 00:04:07.002 20:55:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.002 20:55:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.002 20:55:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.002 20:55:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.002 20:55:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.002 20:55:20 -- setup/common.sh@18 -- # local node= 00:04:07.002 20:55:20 -- setup/common.sh@19 -- # local var val 00:04:07.002 20:55:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.002 20:55:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.002 20:55:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.002 20:55:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.002 20:55:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.002 20:55:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.002 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.002 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.002 20:55:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7978308 kB' 'MemAvailable: 9514700 kB' 'Buffers: 2436 kB' 'Cached: 1750316 kB' 'SwapCached: 0 kB' 'Active: 456536 kB' 'Inactive: 1410296 kB' 'Active(anon): 124552 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115716 kB' 'Mapped: 47940 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133824 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71724 kB' 'KernelStack: 6208 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:07.002 20:55:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.002 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.002 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.002 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.002 20:55:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.002 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.002 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.002 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.002 20:55:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.002 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.002 
20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.002 20:55:20 -- setup/common.sh@31 -- # read -r var val _ [xtrace elided: the identical continue/IFS/read sequence repeats for each /proc/meminfo key, Buffers through HardwareCorrupted] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ AnonHugePages ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.004 20:55:20 -- setup/common.sh@33 -- # echo 1024 00:04:07.004 20:55:20 -- setup/common.sh@33 -- # return 0 00:04:07.004 20:55:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.004 20:55:20 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.004 20:55:20 -- setup/hugepages.sh@27 -- # local node 00:04:07.004 20:55:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.004 20:55:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.004 20:55:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:07.004 20:55:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.004 20:55:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.004 20:55:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.004 20:55:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.004 20:55:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.004 20:55:20 -- setup/common.sh@18 -- # local node=0 00:04:07.004 20:55:20 -- setup/common.sh@19 -- # local var val 00:04:07.004 20:55:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.004 20:55:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
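The trace switches here from the global file to a per-node one: get_meminfo takes an optional node number, and when /sys/devices/system/node/node$node/meminfo exists it reads that instead, stripping the "Node N " prefix so the same key scan works unchanged (that is what the mem=("${mem[@]#Node +([0-9]) }") line does). A sketch of that selection logic, assuming a hypothetical wrapper name node_meminfo and the standard sysfs layout:

    #!/usr/bin/env bash
    # Sketch: dump meminfo for one NUMA node, normalized to the global format.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    node_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node lines read "Node 0 HugePages_Total: 1024"; drop the prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

    node_meminfo 0 | grep HugePages_Total   # e.g. HugePages_Total: 1024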
00:04:07.004 20:55:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.004 20:55:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.004 20:55:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.004 20:55:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7978308 kB' 'MemUsed: 4263664 kB' 'SwapCached: 0 kB' 'Active: 456552 kB' 'Inactive: 1410296 kB' 'Active(anon): 124568 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1752752 kB' 'Mapped: 47940 kB' 'AnonPages: 116040 kB' 'Shmem: 10472 kB' 'KernelStack: 6208 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62100 kB' 'Slab: 133824 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.004 20:55:20 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.004 20:55:20 -- setup/common.sh@32 -- # continue [xtrace elided: the identical continue/IFS/read sequence repeats for each node0 meminfo key, Inactive(file) through FilePmdMapped] 00:04:07.005 20:55:20 -- setup/common.sh@32 -- # [[
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.005 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.005 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.005 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.005 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.005 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.005 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.005 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.005 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.005 20:55:20 -- setup/common.sh@32 -- # continue 00:04:07.005 20:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.005 20:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.005 20:55:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.005 20:55:20 -- setup/common.sh@33 -- # echo 0 00:04:07.005 20:55:20 -- setup/common.sh@33 -- # return 0 00:04:07.005 20:55:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.005 node0=1024 expecting 1024 00:04:07.005 20:55:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.005 20:55:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.005 20:55:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.005 20:55:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.005 20:55:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.005 20:55:20 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:07.005 20:55:20 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:07.005 20:55:20 -- setup/hugepages.sh@202 -- # setup output 00:04:07.005 20:55:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.005 20:55:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.573 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.573 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.573 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.573 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.573 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:07.573 20:55:21 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:07.573 20:55:21 -- setup/hugepages.sh@89 -- # local node 00:04:07.574 20:55:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.574 20:55:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.574 20:55:21 -- setup/hugepages.sh@92 -- # local surp 00:04:07.574 20:55:21 -- setup/hugepages.sh@93 -- # local resv 00:04:07.574 20:55:21 -- setup/hugepages.sh@94 -- # local anon 00:04:07.574 20:55:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.574 20:55:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.574 20:55:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.574 20:55:21 -- setup/common.sh@18 -- # local node= 00:04:07.574 20:55:21 -- setup/common.sh@19 -- # local var val 00:04:07.574 20:55:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.574 20:55:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.574 20:55:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.574 20:55:21 -- setup/common.sh@25 -- # [[ -n '' 
]] 00:04:07.574 20:55:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.574 20:55:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.574 20:55:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7980248 kB' 'MemAvailable: 9516640 kB' 'Buffers: 2436 kB' 'Cached: 1750316 kB' 'SwapCached: 0 kB' 'Active: 457112 kB' 'Inactive: 1410296 kB' 'Active(anon): 125128 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116296 kB' 'Mapped: 48112 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133792 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 6256 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.574 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.574 20:55:21 -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.574 20:55:21 -- setup/common.sh@32 -- # continue [xtrace elided: the identical continue/IFS/read sequence repeats for each /proc/meminfo key, Active(anon) through WritebackTmp] 00:04:07.575
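The AnonHugePages lookup in progress here feeds verify_nr_hugepages, and the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test is how the script decides whether to bother: the kernel marks the active transparent-hugepage mode with brackets in /sys/kernel/mm/transparent_hugepage/enabled, and anonymous hugepages can only appear when THP is not "[never]". A small sketch of both checks, assuming the standard sysfs/procfs paths (an illustration, not SPDK's exact code path):

    #!/usr/bin/env bash
    # Read the active THP mode, then query AnonHugePages only if THP is on.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "thp: $thp; AnonHugePages: ${anon} kB"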
20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.575 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.575 20:55:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.837 20:55:21 -- setup/common.sh@33 -- # echo 0 00:04:07.837 20:55:21 -- setup/common.sh@33 -- # return 0 00:04:07.837 20:55:21 -- setup/hugepages.sh@97 -- # anon=0 00:04:07.837 20:55:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.837 20:55:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.837 20:55:21 -- setup/common.sh@18 -- # local node= 00:04:07.837 20:55:21 -- setup/common.sh@19 -- # local var val 00:04:07.837 20:55:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.837 20:55:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.837 20:55:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.837 20:55:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.837 20:55:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.837 20:55:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.837 20:55:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7980924 kB' 'MemAvailable: 9517316 kB' 'Buffers: 2436 kB' 'Cached: 1750316 kB' 'SwapCached: 0 kB' 'Active: 456640 kB' 'Inactive: 1410296 kB' 'Active(anon): 124656 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410296 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116064 kB' 'Mapped: 47940 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133792 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 6208 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.837 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.837 20:55:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.838 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.838 20:55:21 -- setup/common.sh@32 -- # continue [xtrace elided: the identical continue/IFS/read sequence repeats for each /proc/meminfo key, Active(file) through CmaFree] 00:04:07.839 20:55:21 --
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.839 20:55:21 -- setup/common.sh@33 -- # echo 0 00:04:07.839 20:55:21 -- setup/common.sh@33 -- # return 0 00:04:07.839 20:55:21 -- setup/hugepages.sh@99 -- # surp=0 00:04:07.839 20:55:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.839 20:55:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.839 20:55:21 -- setup/common.sh@18 -- # local node= 00:04:07.839 20:55:21 -- setup/common.sh@19 -- # local var val 00:04:07.839 20:55:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.839 20:55:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.839 20:55:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.839 20:55:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.839 20:55:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.839 20:55:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7980672 kB' 'MemAvailable: 9517064 kB' 'Buffers: 2436 kB' 'Cached: 1750316 kB' 'SwapCached: 0 kB' 'Active: 456848 kB' 'Inactive: 1410296 kB' 'Active(anon): 124864 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116076 kB' 'Mapped: 47940 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133792 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 6208 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 
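One more idiom worth calling out before the trace repeats: the sorted_t[nodes_test[node]]=1 / sorted_s[nodes_sys[node]]=1 assignments seen around the "node0=1024 expecting 1024" lines use a page count as an array index, so if every node reports the same count the array collapses to a single element. A sketch of that set-via-indices trick, with nodes_test pre-filled with the logged value (a hypothetical stand-in, not the script itself):

    #!/usr/bin/env bash
    # Uniqueness check via array indices, as hugepages.sh does per node.
    declare -A nodes_test=([0]=1024)   # stand-in for the per-node results
    declare -a sorted_t=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[$node]]=1
        echo "node$node=${nodes_test[$node]} expecting ${nodes_test[$node]}"
    done
    (( ${#sorted_t[@]} == 1 )) && echo "all nodes expect the same count"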
20:55:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.839 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.839 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 
20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.840 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.840 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.841 20:55:21 -- setup/common.sh@33 -- # echo 0 
00:04:07.841 20:55:21 -- setup/common.sh@33 -- # return 0 00:04:07.841 nr_hugepages=1024 00:04:07.841 resv_hugepages=0 00:04:07.841 surplus_hugepages=0 00:04:07.841 20:55:21 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.841 20:55:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.841 20:55:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.841 20:55:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.841 anon_hugepages=0 00:04:07.841 20:55:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.841 20:55:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.841 20:55:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.841 20:55:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.841 20:55:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.841 20:55:21 -- setup/common.sh@18 -- # local node= 00:04:07.841 20:55:21 -- setup/common.sh@19 -- # local var val 00:04:07.841 20:55:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.841 20:55:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.841 20:55:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.841 20:55:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.841 20:55:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.841 20:55:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7980672 kB' 'MemAvailable: 9517064 kB' 'Buffers: 2436 kB' 'Cached: 1750316 kB' 'SwapCached: 0 kB' 'Active: 456656 kB' 'Inactive: 1410296 kB' 'Active(anon): 124672 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116048 kB' 'Mapped: 47940 kB' 'Shmem: 10472 kB' 'KReclaimable: 62100 kB' 'Slab: 133792 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 6208 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.841 
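
Editor's note: the figures just echoed — nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 — feed the assertion (( 1024 == nr_hugepages + surp + resv )) before the harness re-reads HugePages_Total in the scan below. A self-contained sketch of that consistency check; hp() is a hypothetical helper and it reads the live /proc/meminfo rather than the snapshot used above:

    # Sketch of the consistency check: the pool the kernel reports must
    # equal the requested pages plus surplus and reserved pages.
    hp() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

    nr_hugepages=1024    # this run's pool size, per the trace
    surp=$(hp HugePages_Surp)
    resv=$(hp HugePages_Rsvd)
    total=$(hp HugePages_Total)

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi
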
20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.841 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.841 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 
20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.842 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.842 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.843 20:55:21 -- setup/common.sh@33 -- # echo 1024 00:04:07.843 20:55:21 -- setup/common.sh@33 -- # return 0 00:04:07.843 20:55:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.843 20:55:21 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.843 20:55:21 -- setup/hugepages.sh@27 -- # local node 00:04:07.843 20:55:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.843 20:55:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.843 20:55:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:07.843 20:55:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.843 20:55:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.843 20:55:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.843 20:55:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.843 20:55:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.843 20:55:21 -- setup/common.sh@18 -- # local node=0 00:04:07.843 20:55:21 -- setup/common.sh@19 -- # local var val 00:04:07.843 20:55:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.843 20:55:21 -- 
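
Editor's note: get_nodes, traced just above, discovers the machine's single NUMA node by globbing /sys/devices/system/node/node+([0-9]), and the get_meminfo call that follows reads node0's own meminfo file. A sketch of that per-node walk; keys in per-node files carry a "Node N " prefix, hence the shifted field offsets:

    # Sketch of the per-node walk: enumerate NUMA nodes as get_nodes does,
    # then read each node's own meminfo, where "Node 0 HugePages_Total: 1024"
    # puts the key in field 3 and the value in field 4.
    shopt -s extglob nullglob
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        n=${node_dir##*node}
        total=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
        echo "node$n: ${total:-0} hugepages"    # node0: 1024 on this host
    done
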
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.843 20:55:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.843 20:55:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.843 20:55:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.843 20:55:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7980672 kB' 'MemUsed: 4261300 kB' 'SwapCached: 0 kB' 'Active: 456800 kB' 'Inactive: 1410296 kB' 'Active(anon): 124816 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1410296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1752752 kB' 'Mapped: 47940 kB' 'AnonPages: 116220 kB' 'Shmem: 10472 kB' 'KernelStack: 6224 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62100 kB' 'Slab: 133792 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 71692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.843 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.843 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- 
setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # continue 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.844 20:55:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.844 20:55:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.844 20:55:21 -- setup/common.sh@33 -- # echo 0 00:04:07.844 20:55:21 -- setup/common.sh@33 -- # return 0 00:04:07.844 node0=1024 expecting 1024 00:04:07.844 20:55:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.844 20:55:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.844 20:55:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.844 20:55:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.844 20:55:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.844 20:55:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.844 00:04:07.844 real 0m1.482s 00:04:07.844 user 0m0.693s 00:04:07.844 sys 0m0.820s 00:04:07.844 20:55:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.844 20:55:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.844 ************************************ 00:04:07.844 END TEST no_shrink_alloc 00:04:07.844 ************************************ 00:04:07.844 20:55:21 -- setup/hugepages.sh@217 -- # clear_hp 00:04:07.844 20:55:21 -- setup/hugepages.sh@37 -- # local node hp 00:04:07.844 20:55:21 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.844 20:55:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.844 20:55:21 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.844 20:55:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.845 20:55:21 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.845 20:55:21 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:07.845 20:55:21 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:07.845 00:04:07.845 real 0m6.424s 00:04:07.845 user 0m2.857s 00:04:07.845 sys 0m3.555s 00:04:07.845 ************************************ 00:04:07.845 END TEST hugepages 00:04:07.845 ************************************ 00:04:07.845 20:55:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.845 20:55:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.845 20:55:21 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:07.845 20:55:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.845 20:55:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.845 20:55:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.845 ************************************ 00:04:07.845 START TEST driver 00:04:07.845 ************************************ 00:04:07.845 20:55:21 -- 
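
Editor's note: before the driver suite starts, clear_hp (traced above) returns every per-node pool to zero by writing into each hugepages-<size>/nr_hugepages knob, then exports CLEAR_HUGE=yes. A sketch of that teardown — the sysfs writes need root, so this version only prints what it would do:

    # Sketch of the clear_hp teardown: zero every per-node hugepage pool.
    # Uncomment the redirection to perform the real reset (as root).
    shopt -s extglob nullglob
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        for hp in "$node_dir"/hugepages/hugepages-*; do
            echo "reset: 0 > $hp/nr_hugepages"
            # echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes    # exported right after the loop, per the trace
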
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:08.104 * Looking for test storage... 00:04:08.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.104 20:55:21 -- setup/driver.sh@68 -- # setup reset 00:04:08.104 20:55:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.104 20:55:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.666 20:55:27 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:14.666 20:55:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.666 20:55:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.666 20:55:27 -- common/autotest_common.sh@10 -- # set +x 00:04:14.666 ************************************ 00:04:14.666 START TEST guess_driver 00:04:14.666 ************************************ 00:04:14.666 20:55:27 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:14.666 20:55:27 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:14.666 20:55:27 -- setup/driver.sh@47 -- # local fail=0 00:04:14.666 20:55:27 -- setup/driver.sh@49 -- # pick_driver 00:04:14.666 20:55:27 -- setup/driver.sh@36 -- # vfio 00:04:14.666 20:55:27 -- setup/driver.sh@21 -- # local iommu_grups 00:04:14.666 20:55:27 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:14.666 20:55:27 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:14.666 20:55:27 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:14.666 20:55:27 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:14.666 20:55:27 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:14.666 20:55:27 -- setup/driver.sh@32 -- # return 1 00:04:14.666 20:55:27 -- setup/driver.sh@38 -- # uio 00:04:14.666 20:55:27 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:14.666 20:55:27 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:14.666 20:55:27 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:14.666 20:55:27 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:14.666 20:55:27 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:14.666 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:14.666 20:55:27 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:14.666 Looking for driver=uio_pci_generic 00:04:14.666 20:55:27 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:14.666 20:55:27 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:14.666 20:55:27 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:14.666 20:55:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.666 20:55:27 -- setup/driver.sh@45 -- # setup output config 00:04:14.666 20:55:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.666 20:55:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.925 20:55:28 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:14.925 20:55:28 -- setup/driver.sh@58 -- # continue 00:04:14.925 20:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.184 20:55:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.184 20:55:28 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:15.184 20:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.184 20:55:28 -- setup/driver.sh@58 -- # [[ -> 
== \-\> ]] 00:04:15.184 20:55:28 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:15.184 20:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.184 20:55:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.184 20:55:28 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:15.184 20:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.184 20:55:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.184 20:55:29 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:15.184 20:55:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.184 20:55:29 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:15.184 20:55:29 -- setup/driver.sh@65 -- # setup reset 00:04:15.184 20:55:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.184 20:55:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.764 00:04:21.764 real 0m7.216s 00:04:21.764 user 0m0.843s 00:04:21.764 sys 0m1.520s 00:04:21.764 20:55:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.764 ************************************ 00:04:21.764 END TEST guess_driver 00:04:21.765 ************************************ 00:04:21.765 20:55:35 -- common/autotest_common.sh@10 -- # set +x 00:04:21.765 ************************************ 00:04:21.765 END TEST driver 00:04:21.765 ************************************ 00:04:21.765 00:04:21.765 real 0m13.333s 00:04:21.765 user 0m1.223s 00:04:21.765 sys 0m2.423s 00:04:21.765 20:55:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.765 20:55:35 -- common/autotest_common.sh@10 -- # set +x 00:04:21.765 20:55:35 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:21.765 20:55:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.765 20:55:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.765 20:55:35 -- common/autotest_common.sh@10 -- # set +x 00:04:21.765 ************************************ 00:04:21.765 START TEST devices 00:04:21.765 ************************************ 00:04:21.765 20:55:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:21.765 * Looking for test storage... 
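
Editor's note: the guess_driver run that just finished encodes a two-step pick: prefer vfio when IOMMU groups exist (or unsafe no-IOMMU mode reads Y), otherwise accept uio_pci_generic if modprobe --show-depends resolves it to a real .ko. On this VG host the vfio check failed with zero groups, so uio_pci_generic won. A condensed sketch of that decision; pick_driver_sketch is a hypothetical name:

    # Condensed sketch of the pick_driver / vfio / uio logic traced above.
    pick_driver_sketch() {
        shopt -s nullglob
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio=''
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        # vfio wins when IOMMU groups exist or no-IOMMU mode is enabled.
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio
            return 0
        fi
        # uio_pci_generic is valid only if modprobe resolves it to an
        # actual module, matching the *\.\k\o* test in the log.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }

    pick_driver_sketch    # prints uio_pci_generic on this VG host
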
00:04:21.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:21.765 20:55:35 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:21.765 20:55:35 -- setup/devices.sh@192 -- # setup reset 00:04:21.765 20:55:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.765 20:55:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.701 20:55:36 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:22.702 20:55:36 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:22.702 20:55:36 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:22.702 20:55:36 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:22.702 20:55:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:22.702 20:55:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:04:22.702 20:55:36 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:04:22.702 20:55:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:22.702 20:55:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:22.702 20:55:36 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:22.702 20:55:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:22.702 20:55:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:22.702 20:55:36 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:22.702 20:55:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:22.702 20:55:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:22.702 20:55:36 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:22.702 20:55:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:22.702 20:55:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:22.702 20:55:36 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:22.702 20:55:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:22.702 20:55:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:22.702 20:55:36 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:22.702 20:55:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:22.702 20:55:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:22.702 20:55:36 -- common/autotest_common.sh@1647 -- # local 
device=nvme3n1 00:04:22.702 20:55:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:22.702 20:55:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:22.702 20:55:36 -- setup/devices.sh@196 -- # blocks=() 00:04:22.702 20:55:36 -- setup/devices.sh@196 -- # declare -a blocks 00:04:22.702 20:55:36 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:22.702 20:55:36 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:22.702 20:55:36 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:22.702 20:55:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.702 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:22.702 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:22.702 20:55:36 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:22.702 20:55:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:22.702 20:55:36 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:22.702 20:55:36 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:22.702 20:55:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:22.702 No valid GPT data, bailing 00:04:22.702 20:55:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:22.702 20:55:36 -- scripts/common.sh@393 -- # pt= 00:04:22.702 20:55:36 -- scripts/common.sh@394 -- # return 1 00:04:22.702 20:55:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:22.702 20:55:36 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:22.702 20:55:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:22.702 20:55:36 -- setup/common.sh@80 -- # echo 1073741824 00:04:22.702 20:55:36 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:22.702 20:55:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.702 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:22.702 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:22.702 20:55:36 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:22.702 20:55:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:22.702 20:55:36 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:22.702 20:55:36 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:22.702 20:55:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:22.702 No valid GPT data, bailing 00:04:22.702 20:55:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:22.702 20:55:36 -- scripts/common.sh@393 -- # pt= 00:04:22.702 20:55:36 -- scripts/common.sh@394 -- # return 1 00:04:22.702 20:55:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:22.702 20:55:36 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:22.702 20:55:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:22.702 20:55:36 -- setup/common.sh@80 -- # echo 4294967296 00:04:22.702 20:55:36 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:22.702 20:55:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:22.702 20:55:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:22.702 20:55:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.702 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:22.702 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:22.702 20:55:36 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:22.702 20:55:36 -- setup/devices.sh@203 -- # [[ '' == 
*\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:22.702 20:55:36 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:22.702 20:55:36 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:22.702 20:55:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:22.702 No valid GPT data, bailing 00:04:22.702 20:55:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:22.702 20:55:36 -- scripts/common.sh@393 -- # pt= 00:04:22.702 20:55:36 -- scripts/common.sh@394 -- # return 1 00:04:22.702 20:55:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:22.702 20:55:36 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:22.702 20:55:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:22.702 20:55:36 -- setup/common.sh@80 -- # echo 4294967296 00:04:22.702 20:55:36 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:22.702 20:55:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:22.702 20:55:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:22.702 20:55:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.702 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:22.702 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:22.702 20:55:36 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:22.702 20:55:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:22.702 20:55:36 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:22.702 20:55:36 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:22.702 20:55:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:22.963 No valid GPT data, bailing 00:04:22.963 20:55:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:22.963 20:55:36 -- scripts/common.sh@393 -- # pt= 00:04:22.963 20:55:36 -- scripts/common.sh@394 -- # return 1 00:04:22.963 20:55:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:22.963 20:55:36 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:22.963 20:55:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:22.963 20:55:36 -- setup/common.sh@80 -- # echo 4294967296 00:04:22.963 20:55:36 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:22.963 20:55:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:22.963 20:55:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:22.963 20:55:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.963 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:22.963 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:22.963 20:55:36 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:22.963 20:55:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:22.963 20:55:36 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:22.963 20:55:36 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:22.963 20:55:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:22.963 No valid GPT data, bailing 00:04:22.963 20:55:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:22.963 20:55:36 -- scripts/common.sh@393 -- # pt= 00:04:22.963 20:55:36 -- scripts/common.sh@394 -- # return 1 00:04:22.963 20:55:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:22.963 20:55:36 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:22.963 20:55:36 -- setup/common.sh@78 
-- # [[ -e /sys/block/nvme2n1 ]] 00:04:22.963 20:55:36 -- setup/common.sh@80 -- # echo 6343335936 00:04:22.963 20:55:36 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:22.963 20:55:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:22.963 20:55:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:22.963 20:55:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.963 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:22.963 20:55:36 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:22.963 20:55:36 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:22.963 20:55:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:22.963 20:55:36 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:22.963 20:55:36 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:22.963 20:55:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:22.963 No valid GPT data, bailing 00:04:22.963 20:55:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:22.963 20:55:36 -- scripts/common.sh@393 -- # pt= 00:04:22.963 20:55:36 -- scripts/common.sh@394 -- # return 1 00:04:22.963 20:55:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:22.963 20:55:36 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:22.963 20:55:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:22.963 20:55:36 -- setup/common.sh@80 -- # echo 5368709120 00:04:22.963 20:55:36 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:22.963 20:55:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:22.963 20:55:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:22.963 20:55:36 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:22.963 20:55:36 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:22.963 20:55:36 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:22.963 20:55:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:22.963 20:55:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:22.963 20:55:36 -- common/autotest_common.sh@10 -- # set +x 00:04:22.963 ************************************ 00:04:22.963 START TEST nvme_mount 00:04:22.963 ************************************ 00:04:22.963 20:55:36 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:22.963 20:55:36 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:22.963 20:55:36 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:22.963 20:55:36 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.963 20:55:36 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.963 20:55:36 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:22.963 20:55:36 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:22.963 20:55:36 -- setup/common.sh@40 -- # local part_no=1 00:04:22.963 20:55:36 -- setup/common.sh@41 -- # local size=1073741824 00:04:22.963 20:55:36 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:22.963 20:55:36 -- setup/common.sh@44 -- # parts=() 00:04:22.963 20:55:36 -- setup/common.sh@44 -- # local parts 00:04:22.963 20:55:36 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:22.963 20:55:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.963 20:55:36 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:22.963 20:55:36 -- setup/common.sh@46 -- # (( 
part++ )) 00:04:22.963 20:55:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.963 20:55:36 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:22.963 20:55:36 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:22.963 20:55:36 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:24.341 Creating new GPT entries in memory. 00:04:24.341 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.341 other utilities. 00:04:24.341 20:55:37 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.341 20:55:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.341 20:55:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.341 20:55:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.341 20:55:37 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:25.279 Creating new GPT entries in memory. 00:04:25.279 The operation has completed successfully. 00:04:25.279 20:55:38 -- setup/common.sh@57 -- # (( part++ )) 00:04:25.279 20:55:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.279 20:55:38 -- setup/common.sh@62 -- # wait 53931 00:04:25.279 20:55:38 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.279 20:55:38 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:25.279 20:55:38 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.279 20:55:38 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:25.279 20:55:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:25.279 20:55:38 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.279 20:55:38 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.279 20:55:38 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:25.279 20:55:38 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:25.279 20:55:38 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.279 20:55:38 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.279 20:55:38 -- setup/devices.sh@53 -- # local found=0 00:04:25.279 20:55:38 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.279 20:55:38 -- setup/devices.sh@56 -- # : 00:04:25.279 20:55:38 -- setup/devices.sh@59 -- # local pci status 00:04:25.279 20:55:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.279 20:55:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:25.279 20:55:38 -- setup/devices.sh@47 -- # setup output config 00:04:25.279 20:55:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.279 20:55:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.279 20:55:39 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.279 20:55:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.537 20:55:39 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.537 20:55:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.537 
20:55:39 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.537 20:55:39 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:25.537 20:55:39 -- setup/devices.sh@63 -- # found=1 00:04:25.537 20:55:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.537 20:55:39 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.537 20:55:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.797 20:55:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.797 20:55:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.797 20:55:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:25.797 20:55:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 20:55:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.056 20:55:39 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:26.056 20:55:39 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.056 20:55:39 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.056 20:55:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.056 20:55:39 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:26.056 20:55:39 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.056 20:55:39 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.056 20:55:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:26.056 20:55:39 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:26.056 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.056 20:55:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:26.056 20:55:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:26.315 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.315 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.315 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.315 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:26.315 20:55:40 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:26.315 20:55:40 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:26.315 20:55:40 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.315 20:55:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:26.315 20:55:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:26.315 20:55:40 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.315 20:55:40 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.315 20:55:40 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:26.315 20:55:40 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:26.316 20:55:40 -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.316 20:55:40 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.316 20:55:40 -- setup/devices.sh@53 -- # local found=0 00:04:26.316 20:55:40 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.316 20:55:40 -- setup/devices.sh@56 -- # : 00:04:26.316 20:55:40 -- setup/devices.sh@59 -- # local pci status 00:04:26.316 20:55:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.316 20:55:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:26.316 20:55:40 -- setup/devices.sh@47 -- # setup output config 00:04:26.316 20:55:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.316 20:55:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.316 20:55:40 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.316 20:55:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.575 20:55:40 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.575 20:55:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.834 20:55:40 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.834 20:55:40 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:26.834 20:55:40 -- setup/devices.sh@63 -- # found=1 00:04:26.834 20:55:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.834 20:55:40 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:26.834 20:55:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.094 20:55:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.094 20:55:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.094 20:55:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.094 20:55:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.094 20:55:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.094 20:55:40 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:27.094 20:55:40 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.094 20:55:40 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.094 20:55:40 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.094 20:55:40 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.094 20:55:40 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:27.094 20:55:40 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:27.094 20:55:40 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:27.094 20:55:40 -- setup/devices.sh@50 -- # local mount_point= 00:04:27.094 20:55:40 -- setup/devices.sh@51 -- # local test_file= 00:04:27.094 20:55:40 -- setup/devices.sh@53 -- # local found=0 00:04:27.094 20:55:40 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:27.094 20:55:40 -- setup/devices.sh@59 -- # local pci status 00:04:27.094 20:55:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.094 20:55:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:27.094 20:55:40 -- setup/devices.sh@47 -- # 
setup output config 00:04:27.094 20:55:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.094 20:55:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.353 20:55:41 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.353 20:55:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.353 20:55:41 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.353 20:55:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.612 20:55:41 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.612 20:55:41 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:27.612 20:55:41 -- setup/devices.sh@63 -- # found=1 00:04:27.612 20:55:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.612 20:55:41 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.612 20:55:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.871 20:55:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.871 20:55:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.871 20:55:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.871 20:55:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.129 20:55:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.129 20:55:41 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.129 20:55:41 -- setup/devices.sh@68 -- # return 0 00:04:28.129 20:55:41 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:28.129 20:55:41 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.129 20:55:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:28.129 20:55:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:28.129 20:55:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:28.129 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.129 00:04:28.129 real 0m4.987s 00:04:28.129 user 0m1.183s 00:04:28.129 sys 0m1.509s 00:04:28.129 20:55:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.129 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:04:28.129 ************************************ 00:04:28.129 END TEST nvme_mount 00:04:28.130 ************************************ 00:04:28.130 20:55:41 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:28.130 20:55:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.130 20:55:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.130 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:04:28.130 ************************************ 00:04:28.130 START TEST dm_mount 00:04:28.130 ************************************ 00:04:28.130 20:55:41 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:28.130 20:55:41 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:28.130 20:55:41 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:28.130 20:55:41 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:28.130 20:55:41 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:04:28.130 20:55:41 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:28.130 20:55:41 -- setup/common.sh@40 -- # local part_no=2 00:04:28.130 20:55:41 -- setup/common.sh@41 -- # local size=1073741824 00:04:28.130 20:55:41 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:28.130 
20:55:41 -- setup/common.sh@44 -- # parts=() 00:04:28.130 20:55:41 -- setup/common.sh@44 -- # local parts 00:04:28.130 20:55:41 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:28.130 20:55:41 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.130 20:55:41 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.130 20:55:41 -- setup/common.sh@46 -- # (( part++ )) 00:04:28.130 20:55:41 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.130 20:55:41 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.130 20:55:41 -- setup/common.sh@46 -- # (( part++ )) 00:04:28.130 20:55:41 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.130 20:55:41 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:28.130 20:55:41 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:28.130 20:55:41 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:29.067 Creating new GPT entries in memory. 00:04:29.067 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:29.067 other utilities. 00:04:29.067 20:55:42 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:29.067 20:55:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.067 20:55:42 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.067 20:55:42 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.067 20:55:42 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:30.443 Creating new GPT entries in memory. 00:04:30.443 The operation has completed successfully. 00:04:30.443 20:55:43 -- setup/common.sh@57 -- # (( part++ )) 00:04:30.443 20:55:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.443 20:55:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.443 20:55:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.443 20:55:43 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:31.379 The operation has completed successfully. 
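The partitioning step just completed reduces to a short sgdisk sequence. A minimal standalone sketch follows, assuming a disposable scratch disk at /dev/nvme1n1 and that sgdisk and udevadm are installed; the autotest itself serializes the calls under flock and its own sync_dev_uevents.sh helper, for which udevadm settle is the stock equivalent.

#!/usr/bin/env bash
# Illustrative sketch, not the autotest script: wipe a scratch disk and
# carve the same two 128 MiB partitions the trace above creates.
disk=/dev/nvme1n1                      # assumption: a disposable test disk
sgdisk "$disk" --zap-all               # destroy existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191     # partition 1, sectors 2048-264191
sgdisk "$disk" --new=2:264192:526335   # partition 2, sectors 264192-526335
udevadm settle                         # wait for the new partition uevents
lsblk "$disk"                          # nvme1n1p1 / nvme1n1p2 should be listed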
00:04:31.379 20:55:44 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.379 20:55:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.379 20:55:44 -- setup/common.sh@62 -- # wait 54560 00:04:31.379 20:55:44 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:31.379 20:55:44 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.379 20:55:44 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.379 20:55:44 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:31.379 20:55:45 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:31.379 20:55:45 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.379 20:55:45 -- setup/devices.sh@161 -- # break 00:04:31.379 20:55:45 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.379 20:55:45 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:31.379 20:55:45 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:31.379 20:55:45 -- setup/devices.sh@166 -- # dm=dm-0 00:04:31.379 20:55:45 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:31.379 20:55:45 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:31.379 20:55:45 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.379 20:55:45 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:31.379 20:55:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.379 20:55:45 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.379 20:55:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:31.379 20:55:45 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.379 20:55:45 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.379 20:55:45 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:31.379 20:55:45 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:31.379 20:55:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.379 20:55:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.379 20:55:45 -- setup/devices.sh@53 -- # local found=0 00:04:31.379 20:55:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.379 20:55:45 -- setup/devices.sh@56 -- # : 00:04:31.379 20:55:45 -- setup/devices.sh@59 -- # local pci status 00:04:31.379 20:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.379 20:55:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:31.379 20:55:45 -- setup/devices.sh@47 -- # setup output config 00:04:31.379 20:55:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.379 20:55:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.379 20:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.379 20:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.638 20:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.638 20:55:45 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.638 20:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.638 20:55:45 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:31.638 20:55:45 -- setup/devices.sh@63 -- # found=1 00:04:31.638 20:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.638 20:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.638 20:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.896 20:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.896 20:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.896 20:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.896 20:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.155 20:55:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.155 20:55:45 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:32.155 20:55:45 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.155 20:55:45 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.155 20:55:45 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.155 20:55:45 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.155 20:55:45 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:32.155 20:55:45 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:32.155 20:55:45 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:32.155 20:55:45 -- setup/devices.sh@50 -- # local mount_point= 00:04:32.155 20:55:45 -- setup/devices.sh@51 -- # local test_file= 00:04:32.155 20:55:45 -- setup/devices.sh@53 -- # local found=0 00:04:32.155 20:55:45 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.155 20:55:45 -- setup/devices.sh@59 -- # local pci status 00:04:32.155 20:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.155 20:55:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:32.155 20:55:45 -- setup/devices.sh@47 -- # setup output config 00:04:32.155 20:55:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.155 20:55:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.155 20:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.155 20:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.413 20:55:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.413 20:55:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.671 20:55:46 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.671 20:55:46 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:32.671 20:55:46 -- setup/devices.sh@63 -- # found=1 00:04:32.671 20:55:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.671 20:55:46 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.671 20:55:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.671 20:55:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.671 20:55:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.930 20:55:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:32.930 20:55:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.930 20:55:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.930 20:55:46 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:32.930 20:55:46 -- setup/devices.sh@68 -- # return 0 00:04:32.930 20:55:46 -- setup/devices.sh@187 -- # cleanup_dm 00:04:32.930 20:55:46 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.930 20:55:46 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:32.930 20:55:46 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:32.930 20:55:46 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:32.930 20:55:46 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:32.930 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.930 20:55:46 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:32.930 20:55:46 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:32.930 00:04:32.930 real 0m4.880s 00:04:32.930 user 0m0.803s 00:04:32.930 sys 0m1.010s 00:04:32.930 20:55:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.930 ************************************ 00:04:32.930 END TEST dm_mount 00:04:32.930 20:55:46 -- common/autotest_common.sh@10 -- # set +x 00:04:32.930 ************************************ 00:04:32.930 20:55:46 -- setup/devices.sh@1 -- # cleanup 00:04:32.930 20:55:46 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:32.930 20:55:46 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.930 20:55:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:32.930 20:55:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:32.930 20:55:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:32.930 20:55:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:33.188 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:33.188 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:33.188 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:33.189 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:33.189 20:55:47 -- setup/devices.sh@12 -- # cleanup_dm 00:04:33.189 20:55:47 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.189 20:55:47 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.189 20:55:47 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:33.189 20:55:47 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:33.189 20:55:47 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:33.189 20:55:47 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:33.189 00:04:33.189 real 0m11.950s 00:04:33.189 user 0m2.900s 00:04:33.189 sys 0m3.370s 00:04:33.189 20:55:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.189 20:55:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.189 ************************************ 00:04:33.189 END TEST devices 00:04:33.189 
************************************ 00:04:33.448 00:04:33.448 real 0m43.153s 00:04:33.448 user 0m9.696s 00:04:33.448 sys 0m13.115s 00:04:33.448 20:55:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.448 20:55:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.448 ************************************ 00:04:33.448 END TEST setup.sh 00:04:33.448 ************************************ 00:04:33.448 20:55:47 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:33.448 Hugepages 00:04:33.448 node hugesize free / total 00:04:33.448 node0 1048576kB 0 / 0 00:04:33.448 node0 2048kB 2048 / 2048 00:04:33.448 00:04:33.448 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:33.707 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:33.707 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:33.707 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:33.707 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:33.965 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:33.965 20:55:47 -- spdk/autotest.sh@141 -- # uname -s 00:04:33.965 20:55:47 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:33.965 20:55:47 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:33.965 20:55:47 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.162 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.162 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.162 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.162 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.162 20:55:48 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:36.099 20:55:49 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:36.099 20:55:49 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:36.099 20:55:49 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:36.099 20:55:50 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:36.099 20:55:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:36.099 20:55:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:36.099 20:55:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.099 20:55:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:36.099 20:55:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:36.358 20:55:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:36.358 20:55:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:36.358 20:55:50 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.617 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.875 Waiting for block devices as requested 00:04:36.875 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:36.875 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.134 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.134 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.415 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:04:42.415 20:55:55 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 
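The BDF list this loop walks was assembled a few entries up: gen_nvme.sh emits an SPDK JSON config, and jq pulls each controller's PCI address out of it. A condensed, illustrative sketch of that helper, assuming jq is installed and the repo checkout path below:

# Condensed form of the get_nvme_bdfs step traced above (illustrative).
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1        # bail out if no controllers were found
printf '%s\n' "${bdfs[@]}"             # e.g. 0000:00:06.0 ... 0000:00:09.0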
00:04:42.415 20:55:55 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:42.415 20:55:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:42.415 20:55:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:42.415 20:55:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:42.415 20:55:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:42.415 20:55:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme2 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:42.415 20:55:56 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:42.415 20:55:56 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:42.415 20:55:56 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1542 -- # continue 00:04:42.415 20:55:56 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:42.415 20:55:56 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:42.415 20:55:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:04:42.415 20:55:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:42.415 20:55:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:42.415 20:55:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:42.415 20:55:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:42.415 20:55:56 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme3 00:04:42.415 20:55:56 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme3 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme3 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:42.415 20:55:56 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:42.415 20:55:56 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme3 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:42.415 20:55:56 -- 
common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:42.415 20:55:56 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1542 -- # continue 00:04:42.415 20:55:56 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:42.415 20:55:56 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:42.415 20:55:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:42.415 20:55:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:08.0/nvme/nvme 00:04:42.415 20:55:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:42.415 20:55:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:42.415 20:55:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:42.415 20:55:56 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:04:42.415 20:55:56 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:42.415 20:55:56 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:42.415 20:55:56 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:42.415 20:55:56 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1542 -- # continue 00:04:42.415 20:55:56 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:42.415 20:55:56 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:42.415 20:55:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:42.415 20:55:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:09.0/nvme/nvme 00:04:42.415 20:55:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:42.415 20:55:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:42.415 20:55:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:42.415 20:55:56 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:42.415 20:55:56 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 
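The probe just traced reads two id-ctrl fields per controller: OACS, whose bit 3 (0x8) advertises namespace management, and unvmcap, which must be 0 before the namespace-revert logic proceeds. A condensed sketch, assuming nvme-cli and any NVMe character device; the autotest derives the device node from the BDF instead of hardcoding it.

# Illustrative per-controller gate; /dev/nvme1 is an assumed stand-in.
ctrlr=/dev/nvme1
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # e.g. ' 0x12a'
oacs_ns_manage=$(( oacs & 0x8 ))                          # 8 => ns mgmt supported
unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
echo "ns_manage=$oacs_ns_manage unvmcap=$unvmcap"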
00:04:42.415 20:55:56 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:42.415 20:55:56 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:42.415 20:55:56 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:42.415 20:55:56 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:42.415 20:55:56 -- common/autotest_common.sh@1542 -- # continue 00:04:42.415 20:55:56 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:42.415 20:55:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:42.415 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.415 20:55:56 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:42.415 20:55:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:42.415 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.415 20:55:56 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.351 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.351 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.351 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.610 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.610 20:55:57 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:43.610 20:55:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:43.610 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.610 20:55:57 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:43.610 20:55:57 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:43.610 20:55:57 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.610 20:55:57 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:43.610 20:55:57 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:43.610 20:55:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:43.610 20:55:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:43.610 20:55:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:43.610 20:55:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.610 20:55:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:43.610 20:55:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:43.610 20:55:57 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:43.610 20:55:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:43.610 20:55:57 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:43.610 20:55:57 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:43.610 20:55:57 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:43.610 20:55:57 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.610 20:55:57 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:43.610 20:55:57 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:43.610 20:55:57 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:43.610 20:55:57 -- common/autotest_common.sh@1566 -- # [[ 
0x0010 == \0\x\0\a\5\4 ]] 00:04:43.610 20:55:57 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:43.610 20:55:57 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:43.610 20:55:57 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:43.610 20:55:57 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.610 20:55:57 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:43.610 20:55:57 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:43.610 20:55:57 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:43.610 20:55:57 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.610 20:55:57 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:43.610 20:55:57 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:43.610 20:55:57 -- common/autotest_common.sh@1578 -- # return 0 00:04:43.610 20:55:57 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:43.610 20:55:57 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:43.610 20:55:57 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:43.610 20:55:57 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:43.610 20:55:57 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:43.610 20:55:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:43.610 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.610 20:55:57 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.610 20:55:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.610 20:55:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.610 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.869 ************************************ 00:04:43.870 START TEST env 00:04:43.870 ************************************ 00:04:43.870 20:55:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.870 * Looking for test storage... 
00:04:43.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:43.870 20:55:57 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:43.870 20:55:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.870 20:55:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.870 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.870 ************************************ 00:04:43.870 START TEST env_memory 00:04:43.870 ************************************ 00:04:43.870 20:55:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:43.870 00:04:43.870 00:04:43.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.870 http://cunit.sourceforge.net/ 00:04:43.870 00:04:43.870 00:04:43.870 Suite: memory 00:04:43.870 Test: alloc and free memory map ...[2024-07-13 20:55:57.707615] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:43.870 passed 00:04:43.870 Test: mem map translation ...[2024-07-13 20:55:57.769225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:43.870 [2024-07-13 20:55:57.769494] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:43.870 [2024-07-13 20:55:57.769782] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:43.870 [2024-07-13 20:55:57.769998] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:44.129 passed 00:04:44.129 Test: mem map registration ...[2024-07-13 20:55:57.868810] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:44.129 [2024-07-13 20:55:57.869077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:44.129 passed 00:04:44.129 Test: mem map adjacent registrations ...passed 00:04:44.129 00:04:44.129 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.129 suites 1 1 n/a 0 0 00:04:44.129 tests 4 4 4 0 0 00:04:44.129 asserts 152 152 152 0 n/a 00:04:44.129 00:04:44.129 Elapsed time = 0.344 seconds 00:04:44.129 00:04:44.129 real 0m0.389s 00:04:44.129 user 0m0.349s 00:04:44.129 sys 0m0.030s 00:04:44.129 20:55:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.129 ************************************ 00:04:44.129 END TEST env_memory 00:04:44.129 ************************************ 00:04:44.129 20:55:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.389 20:55:58 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:44.389 20:55:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.389 20:55:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.389 20:55:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.389 ************************************ 00:04:44.389 START TEST env_vtophys 00:04:44.389 ************************************ 00:04:44.389 20:55:58 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:44.389 EAL: lib.eal log level changed from notice to debug 00:04:44.389 EAL: Detected lcore 0 as core 0 on socket 0 00:04:44.389 EAL: Detected lcore 1 as core 0 on socket 0 00:04:44.389 EAL: Detected lcore 2 as core 0 on socket 0 00:04:44.389 EAL: Detected lcore 3 as core 0 on socket 0 00:04:44.389 EAL: Detected lcore 4 as core 0 on socket 0 00:04:44.389 EAL: Detected lcore 5 as core 0 on socket 0 00:04:44.389 EAL: Detected lcore 6 as core 0 on socket 0 00:04:44.389 EAL: Detected lcore 7 as core 0 on socket 0 00:04:44.389 EAL: Detected lcore 8 as core 0 on socket 0 00:04:44.389 EAL: Detected lcore 9 as core 0 on socket 0 00:04:44.389 EAL: Maximum logical cores by configuration: 128 00:04:44.389 EAL: Detected CPU lcores: 10 00:04:44.389 EAL: Detected NUMA nodes: 1 00:04:44.389 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:44.389 EAL: Detected shared linkage of DPDK 00:04:44.389 EAL: No shared files mode enabled, IPC will be disabled 00:04:44.389 EAL: Selected IOVA mode 'PA' 00:04:44.389 EAL: Probing VFIO support... 00:04:44.389 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:44.389 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:44.389 EAL: Ask a virtual area of 0x2e000 bytes 00:04:44.389 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:44.389 EAL: Setting up physically contiguous memory... 00:04:44.389 EAL: Setting maximum number of open files to 524288 00:04:44.389 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:44.389 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:44.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.389 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:44.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.389 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:44.389 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:44.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.389 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:44.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.389 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:44.389 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:44.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.389 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:44.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.389 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:44.389 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:44.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.389 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:44.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.389 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:44.389 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:44.389 EAL: Hugepages will be freed exactly as allocated. 
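Everything EAL maps above is backed by the kernel's 2 MB hugepage pool (2048 pages on this VM, matching the earlier setup.sh status output). A minimal sketch of sizing and inspecting that pool outside of SPDK; it needs root, and the paths are the standard kernel interfaces rather than anything autotest-specific.

# Illustrative pool setup; setup.sh performs this (and more) for the autotest.
echo 2048 > /proc/sys/vm/nr_hugepages            # reserve 2048 x 2 MB pages
grep -E 'HugePages_(Total|Free)' /proc/meminfo   # confirm the pool size
mountpoint -q /dev/hugepages ||
    mount -t hugetlbfs nodev /dev/hugepages      # hugepage backing files live here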
00:04:44.389 EAL: No shared files mode enabled, IPC is disabled 00:04:44.389 EAL: No shared files mode enabled, IPC is disabled 00:04:44.389 EAL: TSC frequency is ~2200000 KHz 00:04:44.389 EAL: Main lcore 0 is ready (tid=7f54a020fa40;cpuset=[0]) 00:04:44.389 EAL: Trying to obtain current memory policy. 00:04:44.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.389 EAL: Restoring previous memory policy: 0 00:04:44.389 EAL: request: mp_malloc_sync 00:04:44.389 EAL: No shared files mode enabled, IPC is disabled 00:04:44.389 EAL: Heap on socket 0 was expanded by 2MB 00:04:44.389 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:44.389 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:44.389 EAL: Mem event callback 'spdk:(nil)' registered 00:04:44.389 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:44.389 00:04:44.389 00:04:44.389 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.389 http://cunit.sourceforge.net/ 00:04:44.389 00:04:44.389 00:04:44.389 Suite: components_suite 00:04:44.958 Test: vtophys_malloc_test ...passed 00:04:44.958 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:44.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.958 EAL: Restoring previous memory policy: 4 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was expanded by 4MB 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was shrunk by 4MB 00:04:44.958 EAL: Trying to obtain current memory policy. 00:04:44.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.958 EAL: Restoring previous memory policy: 4 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was expanded by 6MB 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was shrunk by 6MB 00:04:44.958 EAL: Trying to obtain current memory policy. 00:04:44.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.958 EAL: Restoring previous memory policy: 4 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was expanded by 10MB 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was shrunk by 10MB 00:04:44.958 EAL: Trying to obtain current memory policy. 
00:04:44.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.958 EAL: Restoring previous memory policy: 4 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was expanded by 18MB 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was shrunk by 18MB 00:04:44.958 EAL: Trying to obtain current memory policy. 00:04:44.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.958 EAL: Restoring previous memory policy: 4 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was expanded by 34MB 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was shrunk by 34MB 00:04:44.958 EAL: Trying to obtain current memory policy. 00:04:44.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.958 EAL: Restoring previous memory policy: 4 00:04:44.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.958 EAL: request: mp_malloc_sync 00:04:44.958 EAL: No shared files mode enabled, IPC is disabled 00:04:44.958 EAL: Heap on socket 0 was expanded by 66MB 00:04:45.218 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.218 EAL: request: mp_malloc_sync 00:04:45.218 EAL: No shared files mode enabled, IPC is disabled 00:04:45.218 EAL: Heap on socket 0 was shrunk by 66MB 00:04:45.218 EAL: Trying to obtain current memory policy. 00:04:45.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.218 EAL: Restoring previous memory policy: 4 00:04:45.218 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.218 EAL: request: mp_malloc_sync 00:04:45.218 EAL: No shared files mode enabled, IPC is disabled 00:04:45.218 EAL: Heap on socket 0 was expanded by 130MB 00:04:45.477 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.477 EAL: request: mp_malloc_sync 00:04:45.477 EAL: No shared files mode enabled, IPC is disabled 00:04:45.477 EAL: Heap on socket 0 was shrunk by 130MB 00:04:45.477 EAL: Trying to obtain current memory policy. 00:04:45.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.736 EAL: Restoring previous memory policy: 4 00:04:45.736 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.736 EAL: request: mp_malloc_sync 00:04:45.736 EAL: No shared files mode enabled, IPC is disabled 00:04:45.736 EAL: Heap on socket 0 was expanded by 258MB 00:04:45.995 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.995 EAL: request: mp_malloc_sync 00:04:45.995 EAL: No shared files mode enabled, IPC is disabled 00:04:45.995 EAL: Heap on socket 0 was shrunk by 258MB 00:04:46.254 EAL: Trying to obtain current memory policy. 
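Each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair in this run is one round of vtophys_malloc_test: an allocation large enough to force the DPDK heap to grow by new hugepages, followed by a free that lets it shrink again. The "Calling mem event callback 'spdk:(nil)'" lines are SPDK's registered heap callback keeping its translation maps in sync as those pages come and go, which is what keeps spdk_vtophys() correct. A rough equivalent of one round, assuming an initialized env with hugepages available; names are illustrative:

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"

int
main(void)
{
	struct spdk_env_opts opts;
	void *buf;
	uint64_t size = 4 * 1024 * 1024;  /* forces a heap expansion */
	uint64_t phys;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_example";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* Allocation grows the heap ("expanded by 4MB" above); the mem
	 * event callback registers the new pages with the env layer. */
	buf = spdk_dma_malloc(size, 0x200000 /* 2 MB alignment */, NULL);
	if (buf == NULL) {
		return 1;
	}
	/* Translation only succeeds because the pages were registered. */
	phys = spdk_vtophys(buf, &size);
	if (phys == SPDK_VTOPHYS_ERROR) {
		spdk_dma_free(buf);
		return 1;
	}
	printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, phys);
	/* Freeing lets the heap shrink again ("shrunk by 4MB" above). */
	spdk_dma_free(buf);
	return 0;
}
```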
00:04:46.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.254 EAL: Restoring previous memory policy: 4 00:04:46.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.254 EAL: request: mp_malloc_sync 00:04:46.254 EAL: No shared files mode enabled, IPC is disabled 00:04:46.254 EAL: Heap on socket 0 was expanded by 514MB 00:04:47.191 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.191 EAL: request: mp_malloc_sync 00:04:47.191 EAL: No shared files mode enabled, IPC is disabled 00:04:47.191 EAL: Heap on socket 0 was shrunk by 514MB 00:04:47.758 EAL: Trying to obtain current memory policy. 00:04:47.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.758 EAL: Restoring previous memory policy: 4 00:04:47.758 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.758 EAL: request: mp_malloc_sync 00:04:47.758 EAL: No shared files mode enabled, IPC is disabled 00:04:47.758 EAL: Heap on socket 0 was expanded by 1026MB 00:04:49.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.137 EAL: request: mp_malloc_sync 00:04:49.137 EAL: No shared files mode enabled, IPC is disabled 00:04:49.137 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:50.531 passed 00:04:50.531 00:04:50.531 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.531 suites 1 1 n/a 0 0 00:04:50.531 tests 2 2 2 0 0 00:04:50.531 asserts 5348 5348 5348 0 n/a 00:04:50.531 00:04:50.531 Elapsed time = 5.865 seconds 00:04:50.531 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.531 EAL: request: mp_malloc_sync 00:04:50.531 EAL: No shared files mode enabled, IPC is disabled 00:04:50.531 EAL: Heap on socket 0 was shrunk by 2MB 00:04:50.531 EAL: No shared files mode enabled, IPC is disabled 00:04:50.531 EAL: No shared files mode enabled, IPC is disabled 00:04:50.531 EAL: No shared files mode enabled, IPC is disabled 00:04:50.531 00:04:50.531 real 0m6.179s 00:04:50.531 user 0m5.372s 00:04:50.531 sys 0m0.653s 00:04:50.531 20:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.531 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.531 ************************************ 00:04:50.531 END TEST env_vtophys 00:04:50.531 ************************************ 00:04:50.531 20:56:04 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:50.531 20:56:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.531 20:56:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.531 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.531 ************************************ 00:04:50.531 START TEST env_pci 00:04:50.531 ************************************ 00:04:50.531 20:56:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:50.531 00:04:50.531 00:04:50.531 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.531 http://cunit.sourceforge.net/ 00:04:50.531 00:04:50.531 00:04:50.531 Suite: pci 00:04:50.531 Test: pci_hook ...[2024-07-13 20:56:04.340633] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56278 has claimed it 00:04:50.531 passed 00:04:50.531 00:04:50.531 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.531 suites 1 1 n/a 0 0 00:04:50.531 tests 1 1 1 0 0 00:04:50.531 asserts 25 25 25 0 n/a 00:04:50.531 00:04:50.531 Elapsed time = 0.009 seconds 00:04:50.531 EAL: Cannot find device (10000:00:01.0) 00:04:50.531 EAL: Failed to attach device 
on primary process 00:04:50.531 ************************************ 00:04:50.531 END TEST env_pci 00:04:50.531 ************************************ 00:04:50.531 00:04:50.531 real 0m0.081s 00:04:50.531 user 0m0.037s 00:04:50.531 sys 0m0.043s 00:04:50.531 20:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.531 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.531 20:56:04 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:50.531 20:56:04 -- env/env.sh@15 -- # uname 00:04:50.531 20:56:04 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:50.531 20:56:04 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:50.531 20:56:04 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:50.531 20:56:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:50.531 20:56:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.531 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:04:50.531 ************************************ 00:04:50.531 START TEST env_dpdk_post_init 00:04:50.531 ************************************ 00:04:50.531 20:56:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:50.791 EAL: Detected CPU lcores: 10 00:04:50.791 EAL: Detected NUMA nodes: 1 00:04:50.791 EAL: Detected shared linkage of DPDK 00:04:50.791 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:50.791 EAL: Selected IOVA mode 'PA' 00:04:50.791 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:50.791 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:50.791 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:50.791 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:50.791 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:50.791 Starting DPDK initialization... 00:04:50.791 Starting SPDK post initialization... 00:04:50.791 SPDK NVMe probe 00:04:50.791 Attaching to 0000:00:06.0 00:04:50.791 Attaching to 0000:00:07.0 00:04:50.791 Attaching to 0000:00:08.0 00:04:50.791 Attaching to 0000:00:09.0 00:04:50.791 Attached to 0000:00:06.0 00:04:50.791 Attached to 0000:00:07.0 00:04:50.791 Attached to 0000:00:09.0 00:04:50.791 Attached to 0000:00:08.0 00:04:50.791 Cleaning up... 
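The Attaching/Attached pairs printed by env_dpdk_post_init come from spdk_nvme_probe() enumerating the four emulated QEMU NVMe controllers (vendor:device 1b36:0010) at 0000:00:06.0 through 0000:00:09.0; note the attach order (09.0 before 08.0) is not guaranteed to match the probe order. A minimal sketch of that probe flow, assuming the SPDK NVMe driver headers; the callbacks are illustrative:

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;  /* attach to every controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr,
	  const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	spdk_nvme_detach(ctrlr);  /* "Cleaning up..." */
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "probe_example";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL trid means: enumerate local PCIe NVMe devices, producing
	 * the same Attaching/Attached pairs seen in the log above. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
}
```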
00:04:51.050 ************************************ 00:04:51.050 END TEST env_dpdk_post_init 00:04:51.050 ************************************ 00:04:51.050 00:04:51.050 real 0m0.274s 00:04:51.050 user 0m0.098s 00:04:51.050 sys 0m0.079s 00:04:51.050 20:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.050 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:04:51.050 20:56:04 -- env/env.sh@26 -- # uname 00:04:51.050 20:56:04 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.050 20:56:04 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.050 20:56:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.050 20:56:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.050 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:04:51.050 ************************************ 00:04:51.050 START TEST env_mem_callbacks 00:04:51.050 ************************************ 00:04:51.050 20:56:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.050 EAL: Detected CPU lcores: 10 00:04:51.050 EAL: Detected NUMA nodes: 1 00:04:51.050 EAL: Detected shared linkage of DPDK 00:04:51.050 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.050 EAL: Selected IOVA mode 'PA' 00:04:51.050 00:04:51.050 00:04:51.050 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.050 http://cunit.sourceforge.net/ 00:04:51.050 00:04:51.050 00:04:51.050 Suite: memory 00:04:51.050 Test: test ... 00:04:51.050 register 0x200000200000 2097152 00:04:51.050 malloc 3145728 00:04:51.050 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.050 register 0x200000400000 4194304 00:04:51.050 buf 0x2000004fffc0 len 3145728 PASSED 00:04:51.050 malloc 64 00:04:51.050 buf 0x2000004ffec0 len 64 PASSED 00:04:51.050 malloc 4194304 00:04:51.050 register 0x200000800000 6291456 00:04:51.050 buf 0x2000009fffc0 len 4194304 PASSED 00:04:51.050 free 0x2000004fffc0 3145728 00:04:51.309 free 0x2000004ffec0 64 00:04:51.309 unregister 0x200000400000 4194304 PASSED 00:04:51.309 free 0x2000009fffc0 4194304 00:04:51.309 unregister 0x200000800000 6291456 PASSED 00:04:51.309 malloc 8388608 00:04:51.309 register 0x200000400000 10485760 00:04:51.309 buf 0x2000005fffc0 len 8388608 PASSED 00:04:51.309 free 0x2000005fffc0 8388608 00:04:51.309 unregister 0x200000400000 10485760 PASSED 00:04:51.309 passed 00:04:51.309 00:04:51.309 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.309 suites 1 1 n/a 0 0 00:04:51.309 tests 1 1 1 0 0 00:04:51.309 asserts 15 15 15 0 n/a 00:04:51.309 00:04:51.309 Elapsed time = 0.054 seconds 00:04:51.309 ************************************ 00:04:51.309 END TEST env_mem_callbacks 00:04:51.309 ************************************ 00:04:51.309 00:04:51.309 real 0m0.252s 00:04:51.309 user 0m0.097s 00:04:51.309 sys 0m0.051s 00:04:51.309 20:56:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.309 20:56:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.309 00:04:51.309 real 0m7.532s 00:04:51.309 user 0m6.053s 00:04:51.309 sys 0m1.082s 00:04:51.309 20:56:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.309 ************************************ 00:04:51.309 END TEST env 00:04:51.309 ************************************ 00:04:51.309 20:56:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.309 20:56:05 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
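Stepping back to the env_mem_callbacks trace above before the rpc suite output begins: the register/unregister lines there are the env layer picking up heap growth and shrinkage, and each "buf ... len ... PASSED" line is a successful translation lookup against the test's mem map. Application-owned memory can be made visible to the env layer the same way; a sketch under the assumption of a 2 MB-aligned hugepage mapping (the helper names are hypothetical, and MAP_HUGETLB requires preallocated hugepages):

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include <sys/mman.h>

/* Hypothetical helper: register an externally allocated, 2 MB-aligned
 * mapping so DMA-capable code can translate addresses inside it. */
int
register_external_buffer(void **out)
{
	size_t len = 2 * 1024 * 1024;
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (buf == MAP_FAILED) {
		return -1;
	}
	/* Both vaddr and len must be 2 MB granular, mirroring the
	 * "invalid spdk_mem_register parameters" errors in env_memory. */
	if (spdk_mem_register(buf, len) != 0) {
		munmap(buf, len);
		return -1;
	}
	*out = buf;
	return 0;
}

void
unregister_external_buffer(void *buf)
{
	size_t len = 2 * 1024 * 1024;

	spdk_mem_unregister(buf, len);
	munmap(buf, len);
}
```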
00:04:51.309 20:56:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.309 20:56:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.309 20:56:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.309 ************************************ 00:04:51.309 START TEST rpc 00:04:51.309 ************************************ 00:04:51.309 20:56:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:51.309 * Looking for test storage... 00:04:51.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:51.309 20:56:05 -- rpc/rpc.sh@65 -- # spdk_pid=56396 00:04:51.309 20:56:05 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.309 20:56:05 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:51.309 20:56:05 -- rpc/rpc.sh@67 -- # waitforlisten 56396 00:04:51.309 20:56:05 -- common/autotest_common.sh@819 -- # '[' -z 56396 ']' 00:04:51.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.309 20:56:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.309 20:56:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:51.309 20:56:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.309 20:56:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:51.309 20:56:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.568 [2024-07-13 20:56:05.305989] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:51.568 [2024-07-13 20:56:05.306150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56396 ] 00:04:51.568 [2024-07-13 20:56:05.476971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.827 [2024-07-13 20:56:05.644744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:51.827 [2024-07-13 20:56:05.644989] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:51.827 [2024-07-13 20:56:05.645014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56396' to capture a snapshot of events at runtime. 00:04:51.827 [2024-07-13 20:56:05.645027] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56396 for offline analysis/debug. 
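The rpc suite drives this freshly started spdk_tgt over /var/tmp/spdk.sock using built-in JSON-RPC methods (bdev_malloc_create, bdev_passthru_create, bdev_get_bdevs, ...). On the C side, methods like those are registered with SPDK_RPC_REGISTER; a hedged sketch with a made-up method name "example_ping" that replies with a string:

```c
#include "spdk/stdinc.h"
#include "spdk/json.h"
#include "spdk/jsonrpc.h"
#include "spdk/rpc.h"

/* Hypothetical method "example_ping": takes no parameters, returns "pong".
 * Once registered, it is callable the same way the rpc.sh tests below call
 * the built-ins (e.g. via scripts/rpc.py). */
static void
rpc_example_ping(struct spdk_jsonrpc_request *request,
		 const struct spdk_json_val *params)
{
	struct spdk_json_write_ctx *w;

	if (params != NULL) {
		spdk_jsonrpc_send_error_response(request,
						 SPDK_JSONRPC_ERROR_INVALID_PARAMS,
						 "example_ping takes no parameters");
		return;
	}
	w = spdk_jsonrpc_begin_result(request);
	spdk_json_write_string(w, "pong");
	spdk_jsonrpc_end_result(request, w);
}
SPDK_RPC_REGISTER("example_ping", rpc_example_ping, SPDK_RPC_RUNTIME)
```

SPDK_RPC_RUNTIME restricts the method to a fully started target, which matches how the tests here only issue rpc_cmd calls after waitforlisten succeeds.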
00:04:51.827 [2024-07-13 20:56:05.645062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.227 20:56:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:53.227 20:56:06 -- common/autotest_common.sh@852 -- # return 0 00:04:53.227 20:56:06 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.227 20:56:06 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.227 20:56:06 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:53.227 20:56:06 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:53.227 20:56:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.227 20:56:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.227 20:56:06 -- common/autotest_common.sh@10 -- # set +x 00:04:53.227 ************************************ 00:04:53.227 START TEST rpc_integrity 00:04:53.227 ************************************ 00:04:53.227 20:56:06 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:53.227 20:56:06 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.227 20:56:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.227 20:56:06 -- common/autotest_common.sh@10 -- # set +x 00:04:53.227 20:56:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.227 20:56:06 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.227 20:56:06 -- rpc/rpc.sh@13 -- # jq length 00:04:53.227 20:56:07 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.227 20:56:07 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.227 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.227 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.227 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.227 20:56:07 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:53.227 20:56:07 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:53.227 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.227 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.227 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.227 20:56:07 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:53.227 { 00:04:53.227 "name": "Malloc0", 00:04:53.227 "aliases": [ 00:04:53.227 "9c0d9fc0-0157-4778-90b7-02814e5fe4f0" 00:04:53.227 ], 00:04:53.227 "product_name": "Malloc disk", 00:04:53.227 "block_size": 512, 00:04:53.227 "num_blocks": 16384, 00:04:53.227 "uuid": "9c0d9fc0-0157-4778-90b7-02814e5fe4f0", 00:04:53.227 "assigned_rate_limits": { 00:04:53.227 "rw_ios_per_sec": 0, 00:04:53.227 "rw_mbytes_per_sec": 0, 00:04:53.227 "r_mbytes_per_sec": 0, 00:04:53.227 "w_mbytes_per_sec": 0 00:04:53.227 }, 00:04:53.227 "claimed": false, 00:04:53.227 "zoned": false, 00:04:53.227 "supported_io_types": { 00:04:53.227 "read": true, 00:04:53.227 "write": true, 00:04:53.227 "unmap": true, 00:04:53.227 "write_zeroes": true, 00:04:53.227 "flush": true, 00:04:53.227 "reset": true, 00:04:53.227 "compare": false, 00:04:53.227 "compare_and_write": false, 00:04:53.227 "abort": true, 00:04:53.227 "nvme_admin": false, 00:04:53.227 "nvme_io": false 00:04:53.227 }, 00:04:53.228 "memory_domains": [ 00:04:53.228 { 00:04:53.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.228 
"dma_device_type": 2 00:04:53.228 } 00:04:53.228 ], 00:04:53.228 "driver_specific": {} 00:04:53.228 } 00:04:53.228 ]' 00:04:53.228 20:56:07 -- rpc/rpc.sh@17 -- # jq length 00:04:53.228 20:56:07 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:53.228 20:56:07 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:53.228 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.228 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.228 [2024-07-13 20:56:07.116445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:53.228 [2024-07-13 20:56:07.116541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:53.228 [2024-07-13 20:56:07.116584] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:53.228 [2024-07-13 20:56:07.116607] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:53.228 [2024-07-13 20:56:07.119363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:53.228 [2024-07-13 20:56:07.119425] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:53.228 Passthru0 00:04:53.228 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.228 20:56:07 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:53.228 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.228 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.228 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.228 20:56:07 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:53.228 { 00:04:53.228 "name": "Malloc0", 00:04:53.228 "aliases": [ 00:04:53.228 "9c0d9fc0-0157-4778-90b7-02814e5fe4f0" 00:04:53.228 ], 00:04:53.228 "product_name": "Malloc disk", 00:04:53.228 "block_size": 512, 00:04:53.228 "num_blocks": 16384, 00:04:53.228 "uuid": "9c0d9fc0-0157-4778-90b7-02814e5fe4f0", 00:04:53.228 "assigned_rate_limits": { 00:04:53.228 "rw_ios_per_sec": 0, 00:04:53.228 "rw_mbytes_per_sec": 0, 00:04:53.228 "r_mbytes_per_sec": 0, 00:04:53.228 "w_mbytes_per_sec": 0 00:04:53.228 }, 00:04:53.228 "claimed": true, 00:04:53.228 "claim_type": "exclusive_write", 00:04:53.228 "zoned": false, 00:04:53.228 "supported_io_types": { 00:04:53.228 "read": true, 00:04:53.228 "write": true, 00:04:53.228 "unmap": true, 00:04:53.228 "write_zeroes": true, 00:04:53.228 "flush": true, 00:04:53.228 "reset": true, 00:04:53.228 "compare": false, 00:04:53.228 "compare_and_write": false, 00:04:53.228 "abort": true, 00:04:53.228 "nvme_admin": false, 00:04:53.228 "nvme_io": false 00:04:53.228 }, 00:04:53.228 "memory_domains": [ 00:04:53.228 { 00:04:53.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.228 "dma_device_type": 2 00:04:53.228 } 00:04:53.228 ], 00:04:53.228 "driver_specific": {} 00:04:53.228 }, 00:04:53.228 { 00:04:53.228 "name": "Passthru0", 00:04:53.228 "aliases": [ 00:04:53.228 "6a09e693-bad5-5d9c-900c-fdfb1afd5aca" 00:04:53.228 ], 00:04:53.228 "product_name": "passthru", 00:04:53.228 "block_size": 512, 00:04:53.228 "num_blocks": 16384, 00:04:53.228 "uuid": "6a09e693-bad5-5d9c-900c-fdfb1afd5aca", 00:04:53.228 "assigned_rate_limits": { 00:04:53.228 "rw_ios_per_sec": 0, 00:04:53.228 "rw_mbytes_per_sec": 0, 00:04:53.228 "r_mbytes_per_sec": 0, 00:04:53.228 "w_mbytes_per_sec": 0 00:04:53.228 }, 00:04:53.228 "claimed": false, 00:04:53.228 "zoned": false, 00:04:53.228 "supported_io_types": { 00:04:53.228 "read": true, 00:04:53.228 "write": true, 00:04:53.228 "unmap": true, 00:04:53.228 
"write_zeroes": true, 00:04:53.228 "flush": true, 00:04:53.228 "reset": true, 00:04:53.228 "compare": false, 00:04:53.228 "compare_and_write": false, 00:04:53.228 "abort": true, 00:04:53.228 "nvme_admin": false, 00:04:53.228 "nvme_io": false 00:04:53.228 }, 00:04:53.228 "memory_domains": [ 00:04:53.228 { 00:04:53.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.228 "dma_device_type": 2 00:04:53.228 } 00:04:53.228 ], 00:04:53.228 "driver_specific": { 00:04:53.228 "passthru": { 00:04:53.228 "name": "Passthru0", 00:04:53.228 "base_bdev_name": "Malloc0" 00:04:53.228 } 00:04:53.228 } 00:04:53.228 } 00:04:53.228 ]' 00:04:53.228 20:56:07 -- rpc/rpc.sh@21 -- # jq length 00:04:53.487 20:56:07 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:53.487 20:56:07 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:53.487 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.487 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.487 20:56:07 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:53.487 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.487 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.487 20:56:07 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:53.487 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.487 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.487 20:56:07 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:53.487 20:56:07 -- rpc/rpc.sh@26 -- # jq length 00:04:53.487 20:56:07 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.487 00:04:53.487 real 0m0.332s 00:04:53.487 user 0m0.198s 00:04:53.487 sys 0m0.045s 00:04:53.487 20:56:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.487 ************************************ 00:04:53.487 END TEST rpc_integrity 00:04:53.487 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 ************************************ 00:04:53.487 20:56:07 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:53.487 20:56:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.487 20:56:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.487 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 ************************************ 00:04:53.487 START TEST rpc_plugins 00:04:53.487 ************************************ 00:04:53.487 20:56:07 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:53.487 20:56:07 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:53.487 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.487 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.487 20:56:07 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:53.487 20:56:07 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:53.487 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.487 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.487 20:56:07 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:53.487 { 00:04:53.487 "name": "Malloc1", 00:04:53.487 "aliases": [ 00:04:53.487 "bbd2def0-dee1-4b7f-bef6-6b28df92d1b2" 00:04:53.487 ], 00:04:53.487 "product_name": "Malloc disk", 00:04:53.487 
"block_size": 4096, 00:04:53.487 "num_blocks": 256, 00:04:53.487 "uuid": "bbd2def0-dee1-4b7f-bef6-6b28df92d1b2", 00:04:53.487 "assigned_rate_limits": { 00:04:53.487 "rw_ios_per_sec": 0, 00:04:53.487 "rw_mbytes_per_sec": 0, 00:04:53.487 "r_mbytes_per_sec": 0, 00:04:53.487 "w_mbytes_per_sec": 0 00:04:53.487 }, 00:04:53.487 "claimed": false, 00:04:53.487 "zoned": false, 00:04:53.487 "supported_io_types": { 00:04:53.487 "read": true, 00:04:53.487 "write": true, 00:04:53.487 "unmap": true, 00:04:53.487 "write_zeroes": true, 00:04:53.487 "flush": true, 00:04:53.487 "reset": true, 00:04:53.487 "compare": false, 00:04:53.487 "compare_and_write": false, 00:04:53.487 "abort": true, 00:04:53.487 "nvme_admin": false, 00:04:53.487 "nvme_io": false 00:04:53.487 }, 00:04:53.487 "memory_domains": [ 00:04:53.487 { 00:04:53.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.487 "dma_device_type": 2 00:04:53.487 } 00:04:53.487 ], 00:04:53.487 "driver_specific": {} 00:04:53.487 } 00:04:53.487 ]' 00:04:53.487 20:56:07 -- rpc/rpc.sh@32 -- # jq length 00:04:53.746 20:56:07 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:53.746 20:56:07 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:53.746 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.746 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.746 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.746 20:56:07 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:53.746 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.746 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.746 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.746 20:56:07 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:53.746 20:56:07 -- rpc/rpc.sh@36 -- # jq length 00:04:53.746 20:56:07 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:53.746 00:04:53.746 real 0m0.159s 00:04:53.746 user 0m0.111s 00:04:53.746 sys 0m0.011s 00:04:53.746 20:56:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.746 ************************************ 00:04:53.746 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.746 END TEST rpc_plugins 00:04:53.746 ************************************ 00:04:53.746 20:56:07 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:53.746 20:56:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.746 20:56:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.746 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.746 ************************************ 00:04:53.746 START TEST rpc_trace_cmd_test 00:04:53.746 ************************************ 00:04:53.746 20:56:07 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:53.746 20:56:07 -- rpc/rpc.sh@40 -- # local info 00:04:53.746 20:56:07 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:53.746 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.746 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.746 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.746 20:56:07 -- rpc/rpc.sh@42 -- # info='{ 00:04:53.746 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56396", 00:04:53.746 "tpoint_group_mask": "0x8", 00:04:53.746 "iscsi_conn": { 00:04:53.746 "mask": "0x2", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "scsi": { 00:04:53.746 "mask": "0x4", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "bdev": { 00:04:53.746 "mask": "0x8", 00:04:53.746 "tpoint_mask": 
"0xffffffffffffffff" 00:04:53.746 }, 00:04:53.746 "nvmf_rdma": { 00:04:53.746 "mask": "0x10", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "nvmf_tcp": { 00:04:53.746 "mask": "0x20", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "ftl": { 00:04:53.746 "mask": "0x40", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "blobfs": { 00:04:53.746 "mask": "0x80", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "dsa": { 00:04:53.746 "mask": "0x200", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "thread": { 00:04:53.746 "mask": "0x400", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "nvme_pcie": { 00:04:53.746 "mask": "0x800", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "iaa": { 00:04:53.746 "mask": "0x1000", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "nvme_tcp": { 00:04:53.746 "mask": "0x2000", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 }, 00:04:53.746 "bdev_nvme": { 00:04:53.746 "mask": "0x4000", 00:04:53.746 "tpoint_mask": "0x0" 00:04:53.746 } 00:04:53.746 }' 00:04:53.746 20:56:07 -- rpc/rpc.sh@43 -- # jq length 00:04:53.746 20:56:07 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:53.747 20:56:07 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:54.007 20:56:07 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:54.007 20:56:07 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:54.007 20:56:07 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:54.007 20:56:07 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:54.007 20:56:07 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:54.007 20:56:07 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:54.007 20:56:07 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:54.007 00:04:54.007 real 0m0.270s 00:04:54.007 user 0m0.241s 00:04:54.007 sys 0m0.019s 00:04:54.007 20:56:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.007 ************************************ 00:04:54.007 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:54.007 END TEST rpc_trace_cmd_test 00:04:54.007 ************************************ 00:04:54.007 20:56:07 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:54.007 20:56:07 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:54.007 20:56:07 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:54.007 20:56:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.007 20:56:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.007 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:54.007 ************************************ 00:04:54.007 START TEST rpc_daemon_integrity 00:04:54.007 ************************************ 00:04:54.007 20:56:07 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:54.007 20:56:07 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:54.007 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:54.007 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:54.007 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:54.007 20:56:07 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:54.007 20:56:07 -- rpc/rpc.sh@13 -- # jq length 00:04:54.266 20:56:07 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:54.266 20:56:07 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:54.266 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:54.266 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:54.266 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:54.266 20:56:07 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:54.266 20:56:07 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.266 20:56:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:54.266 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:04:54.266 20:56:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:54.266 20:56:07 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.266 { 00:04:54.266 "name": "Malloc2", 00:04:54.266 "aliases": [ 00:04:54.266 "c0f04001-fcf7-402e-828a-892f496d498c" 00:04:54.266 ], 00:04:54.266 "product_name": "Malloc disk", 00:04:54.266 "block_size": 512, 00:04:54.266 "num_blocks": 16384, 00:04:54.266 "uuid": "c0f04001-fcf7-402e-828a-892f496d498c", 00:04:54.266 "assigned_rate_limits": { 00:04:54.266 "rw_ios_per_sec": 0, 00:04:54.266 "rw_mbytes_per_sec": 0, 00:04:54.266 "r_mbytes_per_sec": 0, 00:04:54.266 "w_mbytes_per_sec": 0 00:04:54.266 }, 00:04:54.266 "claimed": false, 00:04:54.266 "zoned": false, 00:04:54.266 "supported_io_types": { 00:04:54.266 "read": true, 00:04:54.266 "write": true, 00:04:54.267 "unmap": true, 00:04:54.267 "write_zeroes": true, 00:04:54.267 "flush": true, 00:04:54.267 "reset": true, 00:04:54.267 "compare": false, 00:04:54.267 "compare_and_write": false, 00:04:54.267 "abort": true, 00:04:54.267 "nvme_admin": false, 00:04:54.267 "nvme_io": false 00:04:54.267 }, 00:04:54.267 "memory_domains": [ 00:04:54.267 { 00:04:54.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.267 "dma_device_type": 2 00:04:54.267 } 00:04:54.267 ], 00:04:54.267 "driver_specific": {} 00:04:54.267 } 00:04:54.267 ]' 00:04:54.267 20:56:07 -- rpc/rpc.sh@17 -- # jq length 00:04:54.267 20:56:08 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.267 20:56:08 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:54.267 20:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:54.267 20:56:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.267 [2024-07-13 20:56:08.040272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:54.267 [2024-07-13 20:56:08.040368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.267 [2024-07-13 20:56:08.040394] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:54.267 [2024-07-13 20:56:08.040411] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.267 [2024-07-13 20:56:08.043001] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.267 [2024-07-13 20:56:08.043082] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.267 Passthru0 00:04:54.267 20:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:54.267 20:56:08 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.267 20:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:54.267 20:56:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.267 20:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:54.267 20:56:08 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.267 { 00:04:54.267 "name": "Malloc2", 00:04:54.267 "aliases": [ 00:04:54.267 "c0f04001-fcf7-402e-828a-892f496d498c" 00:04:54.267 ], 00:04:54.267 "product_name": "Malloc disk", 00:04:54.267 "block_size": 512, 00:04:54.267 "num_blocks": 16384, 00:04:54.267 "uuid": "c0f04001-fcf7-402e-828a-892f496d498c", 00:04:54.267 "assigned_rate_limits": { 00:04:54.267 "rw_ios_per_sec": 0, 00:04:54.267 "rw_mbytes_per_sec": 0, 00:04:54.267 "r_mbytes_per_sec": 0, 00:04:54.267 
"w_mbytes_per_sec": 0 00:04:54.267 }, 00:04:54.267 "claimed": true, 00:04:54.267 "claim_type": "exclusive_write", 00:04:54.267 "zoned": false, 00:04:54.267 "supported_io_types": { 00:04:54.267 "read": true, 00:04:54.267 "write": true, 00:04:54.267 "unmap": true, 00:04:54.267 "write_zeroes": true, 00:04:54.267 "flush": true, 00:04:54.267 "reset": true, 00:04:54.267 "compare": false, 00:04:54.267 "compare_and_write": false, 00:04:54.267 "abort": true, 00:04:54.267 "nvme_admin": false, 00:04:54.267 "nvme_io": false 00:04:54.267 }, 00:04:54.267 "memory_domains": [ 00:04:54.267 { 00:04:54.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.267 "dma_device_type": 2 00:04:54.267 } 00:04:54.267 ], 00:04:54.267 "driver_specific": {} 00:04:54.267 }, 00:04:54.267 { 00:04:54.267 "name": "Passthru0", 00:04:54.267 "aliases": [ 00:04:54.267 "85455b1d-245e-5097-be20-115b03e3370e" 00:04:54.267 ], 00:04:54.267 "product_name": "passthru", 00:04:54.267 "block_size": 512, 00:04:54.267 "num_blocks": 16384, 00:04:54.267 "uuid": "85455b1d-245e-5097-be20-115b03e3370e", 00:04:54.267 "assigned_rate_limits": { 00:04:54.267 "rw_ios_per_sec": 0, 00:04:54.267 "rw_mbytes_per_sec": 0, 00:04:54.267 "r_mbytes_per_sec": 0, 00:04:54.267 "w_mbytes_per_sec": 0 00:04:54.267 }, 00:04:54.267 "claimed": false, 00:04:54.267 "zoned": false, 00:04:54.267 "supported_io_types": { 00:04:54.267 "read": true, 00:04:54.267 "write": true, 00:04:54.267 "unmap": true, 00:04:54.267 "write_zeroes": true, 00:04:54.267 "flush": true, 00:04:54.267 "reset": true, 00:04:54.267 "compare": false, 00:04:54.267 "compare_and_write": false, 00:04:54.267 "abort": true, 00:04:54.267 "nvme_admin": false, 00:04:54.267 "nvme_io": false 00:04:54.267 }, 00:04:54.267 "memory_domains": [ 00:04:54.267 { 00:04:54.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.267 "dma_device_type": 2 00:04:54.267 } 00:04:54.267 ], 00:04:54.267 "driver_specific": { 00:04:54.267 "passthru": { 00:04:54.267 "name": "Passthru0", 00:04:54.267 "base_bdev_name": "Malloc2" 00:04:54.267 } 00:04:54.267 } 00:04:54.267 } 00:04:54.267 ]' 00:04:54.267 20:56:08 -- rpc/rpc.sh@21 -- # jq length 00:04:54.267 20:56:08 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:54.267 20:56:08 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:54.267 20:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:54.267 20:56:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.267 20:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:54.267 20:56:08 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:54.267 20:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:54.267 20:56:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.267 20:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:54.267 20:56:08 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.267 20:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:54.267 20:56:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.267 20:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:54.267 20:56:08 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.267 20:56:08 -- rpc/rpc.sh@26 -- # jq length 00:04:54.526 ************************************ 00:04:54.526 END TEST rpc_daemon_integrity 00:04:54.526 ************************************ 00:04:54.526 20:56:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:54.526 00:04:54.526 real 0m0.342s 00:04:54.526 user 0m0.213s 00:04:54.526 sys 0m0.042s 00:04:54.526 20:56:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.526 
20:56:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.526 20:56:08 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:54.526 20:56:08 -- rpc/rpc.sh@84 -- # killprocess 56396 00:04:54.526 20:56:08 -- common/autotest_common.sh@926 -- # '[' -z 56396 ']' 00:04:54.526 20:56:08 -- common/autotest_common.sh@930 -- # kill -0 56396 00:04:54.526 20:56:08 -- common/autotest_common.sh@931 -- # uname 00:04:54.526 20:56:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:54.526 20:56:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56396 00:04:54.526 killing process with pid 56396 00:04:54.526 20:56:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:54.526 20:56:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:54.526 20:56:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56396' 00:04:54.526 20:56:08 -- common/autotest_common.sh@945 -- # kill 56396 00:04:54.526 20:56:08 -- common/autotest_common.sh@950 -- # wait 56396 00:04:56.432 00:04:56.432 real 0m4.973s 00:04:56.432 user 0m5.958s 00:04:56.432 sys 0m0.724s 00:04:56.432 20:56:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.432 ************************************ 00:04:56.432 END TEST rpc 00:04:56.432 ************************************ 00:04:56.432 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.432 20:56:10 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.432 20:56:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.432 20:56:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.432 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.432 ************************************ 00:04:56.432 START TEST rpc_client 00:04:56.432 ************************************ 00:04:56.432 20:56:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.432 * Looking for test storage... 
00:04:56.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:56.432 20:56:10 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:56.432 OK 00:04:56.432 20:56:10 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:56.432 00:04:56.432 real 0m0.138s 00:04:56.432 user 0m0.071s 00:04:56.432 sys 0m0.072s 00:04:56.432 20:56:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.432 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.432 ************************************ 00:04:56.432 END TEST rpc_client 00:04:56.432 ************************************ 00:04:56.432 20:56:10 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:56.432 20:56:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.432 20:56:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.432 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.432 ************************************ 00:04:56.432 START TEST json_config 00:04:56.432 ************************************ 00:04:56.432 20:56:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:56.691 20:56:10 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.691 20:56:10 -- nvmf/common.sh@7 -- # uname -s 00:04:56.691 20:56:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.691 20:56:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.691 20:56:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.691 20:56:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.692 20:56:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.692 20:56:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.692 20:56:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.692 20:56:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.692 20:56:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.692 20:56:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.692 20:56:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ac8e35c3-2976-4d36-a627-b0337040b223 00:04:56.692 20:56:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=ac8e35c3-2976-4d36-a627-b0337040b223 00:04:56.692 20:56:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.692 20:56:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.692 20:56:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.692 20:56:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.692 20:56:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.692 20:56:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.692 20:56:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.692 20:56:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.692 20:56:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.692 20:56:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.692 20:56:10 -- paths/export.sh@5 -- # export PATH 00:04:56.692 20:56:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.692 20:56:10 -- nvmf/common.sh@46 -- # : 0 00:04:56.692 20:56:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:56.692 20:56:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:56.692 20:56:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:56.692 20:56:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.692 20:56:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.692 20:56:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:56.692 20:56:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:56.692 20:56:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:56.692 20:56:10 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:56.692 20:56:10 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:56.692 20:56:10 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:56.692 20:56:10 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.692 20:56:10 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:56.692 WARNING: No tests are enabled so not running JSON configuration tests 00:04:56.692 20:56:10 -- json_config/json_config.sh@27 -- # exit 0 00:04:56.692 00:04:56.692 real 0m0.076s 00:04:56.692 user 0m0.040s 00:04:56.692 sys 0m0.033s 00:04:56.692 20:56:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.692 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.692 ************************************ 00:04:56.692 END TEST json_config 00:04:56.692 ************************************ 00:04:56.692 20:56:10 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:56.692 20:56:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.692 20:56:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.692 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.692 ************************************ 00:04:56.692 START TEST json_config_extra_key 00:04:56.692 
************************************ 00:04:56.692 20:56:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.692 20:56:10 -- nvmf/common.sh@7 -- # uname -s 00:04:56.692 20:56:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.692 20:56:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.692 20:56:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.692 20:56:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.692 20:56:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.692 20:56:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.692 20:56:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.692 20:56:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.692 20:56:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.692 20:56:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.692 20:56:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ac8e35c3-2976-4d36-a627-b0337040b223 00:04:56.692 20:56:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=ac8e35c3-2976-4d36-a627-b0337040b223 00:04:56.692 20:56:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.692 20:56:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.692 20:56:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.692 20:56:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.692 20:56:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.692 20:56:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.692 20:56:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.692 20:56:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.692 20:56:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.692 20:56:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.692 20:56:10 -- paths/export.sh@5 -- # export PATH 00:04:56.692 20:56:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.692 20:56:10 -- nvmf/common.sh@46 -- # : 0 00:04:56.692 20:56:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:56.692 20:56:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:56.692 20:56:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:56.692 20:56:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.692 20:56:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.692 20:56:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:56.692 20:56:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:56.692 20:56:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:56.692 INFO: launching applications... 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56690 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:56.692 Waiting for target to run... 
00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:56.692 20:56:10 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56690 /var/tmp/spdk_tgt.sock 00:04:56.692 20:56:10 -- common/autotest_common.sh@819 -- # '[' -z 56690 ']' 00:04:56.692 20:56:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.692 20:56:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:56.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.692 20:56:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.692 20:56:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:56.692 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.953 [2024-07-13 20:56:10.662899] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:56.953 [2024-07-13 20:56:10.663504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56690 ] 00:04:57.210 [2024-07-13 20:56:11.009002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.467 [2024-07-13 20:56:11.209164] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.467 [2024-07-13 20:56:11.209368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.401 00:04:58.401 INFO: shutting down applications... 00:04:58.401 20:56:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.402 20:56:12 -- common/autotest_common.sh@852 -- # return 0 00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
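The shutdown that follows is a bounded wait: SIGINT first, then kill -0 polled up to 30 times at half-second intervals. The same idiom, generalized from the trace below:

# Graceful-shutdown wait mirroring json_config_test_shutdown_app; the pid
# is the one recorded in the trace.
app_pid=56690
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
  # kill -0 only probes for existence; it fails once the process is gone.
  kill -0 "$app_pid" 2>/dev/null || break
  sleep 0.5
done

The bound (30 x 0.5 s) keeps a hung target from stalling the run indefinitely.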
00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56690 ]] 00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56690 00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56690 00:04:58.402 20:56:12 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:58.970 20:56:12 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:58.970 20:56:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:58.970 20:56:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56690 00:04:58.970 20:56:12 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:59.536 20:56:13 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:59.536 20:56:13 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:59.536 20:56:13 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56690 00:04:59.536 20:56:13 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:00.103 20:56:13 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:00.103 20:56:13 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:00.103 20:56:13 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56690 00:05:00.103 20:56:13 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:00.669 20:56:14 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:00.669 20:56:14 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:00.669 20:56:14 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56690 00:05:00.669 20:56:14 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:00.929 20:56:14 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:00.929 20:56:14 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:00.929 20:56:14 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56690 00:05:00.929 20:56:14 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:00.929 20:56:14 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:00.929 SPDK target shutdown done 00:05:00.929 Success 00:05:00.929 20:56:14 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:00.929 20:56:14 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:00.929 20:56:14 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:00.929 00:05:00.929 real 0m4.332s 00:05:00.929 user 0m4.041s 00:05:00.929 sys 0m0.480s 00:05:00.929 20:56:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.929 20:56:14 -- common/autotest_common.sh@10 -- # set +x 00:05:00.929 ************************************ 00:05:00.929 END TEST json_config_extra_key 00:05:00.929 ************************************ 00:05:00.929 20:56:14 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.929 20:56:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.929 20:56:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.929 20:56:14 -- common/autotest_common.sh@10 -- # 
set +x 00:05:00.929 ************************************ 00:05:00.929 START TEST alias_rpc 00:05:00.929 ************************************ 00:05:00.929 20:56:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.187 * Looking for test storage... 00:05:01.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:01.187 20:56:14 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.187 20:56:14 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56793 00:05:01.187 20:56:14 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56793 00:05:01.187 20:56:14 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.187 20:56:14 -- common/autotest_common.sh@819 -- # '[' -z 56793 ']' 00:05:01.187 20:56:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.187 20:56:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:01.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.187 20:56:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.187 20:56:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:01.187 20:56:14 -- common/autotest_common.sh@10 -- # set +x 00:05:01.187 [2024-07-13 20:56:15.042495] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:01.187 [2024-07-13 20:56:15.042690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56793 ] 00:05:01.445 [2024-07-13 20:56:15.213991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.704 [2024-07-13 20:56:15.367406] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.704 [2024-07-13 20:56:15.367646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.081 20:56:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:03.081 20:56:16 -- common/autotest_common.sh@852 -- # return 0 00:05:03.081 20:56:16 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:03.081 20:56:16 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56793 00:05:03.081 20:56:16 -- common/autotest_common.sh@926 -- # '[' -z 56793 ']' 00:05:03.081 20:56:16 -- common/autotest_common.sh@930 -- # kill -0 56793 00:05:03.081 20:56:16 -- common/autotest_common.sh@931 -- # uname 00:05:03.081 20:56:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:03.081 20:56:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56793 00:05:03.081 killing process with pid 56793 00:05:03.081 20:56:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:03.081 20:56:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:03.081 20:56:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56793' 00:05:03.081 20:56:16 -- common/autotest_common.sh@945 -- # kill 56793 00:05:03.081 20:56:16 -- common/autotest_common.sh@950 -- # wait 56793 00:05:04.984 00:05:04.984 real 0m3.989s 00:05:04.984 user 0m4.349s 00:05:04.984 sys 0m0.483s 00:05:04.984 20:56:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.984 20:56:18 -- 
common/autotest_common.sh@10 -- # set +x 00:05:04.984 ************************************ 00:05:04.984 END TEST alias_rpc 00:05:04.984 ************************************ 00:05:04.984 20:56:18 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:04.984 20:56:18 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.984 20:56:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.984 20:56:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.984 20:56:18 -- common/autotest_common.sh@10 -- # set +x 00:05:04.984 ************************************ 00:05:04.984 START TEST spdkcli_tcp 00:05:04.984 ************************************ 00:05:04.984 20:56:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:05.243 * Looking for test storage... 00:05:05.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:05.243 20:56:18 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:05.243 20:56:18 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:05.243 20:56:18 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:05.243 20:56:18 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:05.243 20:56:18 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:05.243 20:56:18 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:05.243 20:56:18 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:05.243 20:56:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:05.243 20:56:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.243 20:56:18 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56893 00:05:05.243 20:56:18 -- spdkcli/tcp.sh@27 -- # waitforlisten 56893 00:05:05.243 20:56:18 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:05.243 20:56:18 -- common/autotest_common.sh@819 -- # '[' -z 56893 ']' 00:05:05.243 20:56:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.243 20:56:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:05.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.243 20:56:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.243 20:56:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:05.243 20:56:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.243 [2024-07-13 20:56:19.071084] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:05.243 [2024-07-13 20:56:19.071234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56893 ] 00:05:05.501 [2024-07-13 20:56:19.229702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.501 [2024-07-13 20:56:19.401329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:05.501 [2024-07-13 20:56:19.401828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.501 [2024-07-13 20:56:19.401879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.875 20:56:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:06.875 20:56:20 -- common/autotest_common.sh@852 -- # return 0 00:05:06.875 20:56:20 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:06.875 20:56:20 -- spdkcli/tcp.sh@31 -- # socat_pid=56918 00:05:06.875 20:56:20 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:07.133 [ 00:05:07.133 "bdev_malloc_delete", 00:05:07.133 "bdev_malloc_create", 00:05:07.133 "bdev_null_resize", 00:05:07.133 "bdev_null_delete", 00:05:07.133 "bdev_null_create", 00:05:07.133 "bdev_nvme_cuse_unregister", 00:05:07.133 "bdev_nvme_cuse_register", 00:05:07.133 "bdev_opal_new_user", 00:05:07.133 "bdev_opal_set_lock_state", 00:05:07.133 "bdev_opal_delete", 00:05:07.133 "bdev_opal_get_info", 00:05:07.133 "bdev_opal_create", 00:05:07.133 "bdev_nvme_opal_revert", 00:05:07.133 "bdev_nvme_opal_init", 00:05:07.133 "bdev_nvme_send_cmd", 00:05:07.133 "bdev_nvme_get_path_iostat", 00:05:07.133 "bdev_nvme_get_mdns_discovery_info", 00:05:07.133 "bdev_nvme_stop_mdns_discovery", 00:05:07.133 "bdev_nvme_start_mdns_discovery", 00:05:07.133 "bdev_nvme_set_multipath_policy", 00:05:07.133 "bdev_nvme_set_preferred_path", 00:05:07.133 "bdev_nvme_get_io_paths", 00:05:07.133 "bdev_nvme_remove_error_injection", 00:05:07.133 "bdev_nvme_add_error_injection", 00:05:07.133 "bdev_nvme_get_discovery_info", 00:05:07.133 "bdev_nvme_stop_discovery", 00:05:07.133 "bdev_nvme_start_discovery", 00:05:07.133 "bdev_nvme_get_controller_health_info", 00:05:07.133 "bdev_nvme_disable_controller", 00:05:07.133 "bdev_nvme_enable_controller", 00:05:07.133 "bdev_nvme_reset_controller", 00:05:07.133 "bdev_nvme_get_transport_statistics", 00:05:07.133 "bdev_nvme_apply_firmware", 00:05:07.133 "bdev_nvme_detach_controller", 00:05:07.133 "bdev_nvme_get_controllers", 00:05:07.133 "bdev_nvme_attach_controller", 00:05:07.133 "bdev_nvme_set_hotplug", 00:05:07.133 "bdev_nvme_set_options", 00:05:07.133 "bdev_passthru_delete", 00:05:07.133 "bdev_passthru_create", 00:05:07.133 "bdev_lvol_grow_lvstore", 00:05:07.133 "bdev_lvol_get_lvols", 00:05:07.133 "bdev_lvol_get_lvstores", 00:05:07.133 "bdev_lvol_delete", 00:05:07.133 "bdev_lvol_set_read_only", 00:05:07.133 "bdev_lvol_resize", 00:05:07.133 "bdev_lvol_decouple_parent", 00:05:07.133 "bdev_lvol_inflate", 00:05:07.133 "bdev_lvol_rename", 00:05:07.133 "bdev_lvol_clone_bdev", 00:05:07.133 "bdev_lvol_clone", 00:05:07.133 "bdev_lvol_snapshot", 00:05:07.134 "bdev_lvol_create", 00:05:07.134 "bdev_lvol_delete_lvstore", 00:05:07.134 "bdev_lvol_rename_lvstore", 00:05:07.134 "bdev_lvol_create_lvstore", 00:05:07.134 "bdev_raid_set_options", 00:05:07.134 "bdev_raid_remove_base_bdev", 00:05:07.134 "bdev_raid_add_base_bdev", 
00:05:07.134 "bdev_raid_delete", 00:05:07.134 "bdev_raid_create", 00:05:07.134 "bdev_raid_get_bdevs", 00:05:07.134 "bdev_error_inject_error", 00:05:07.134 "bdev_error_delete", 00:05:07.134 "bdev_error_create", 00:05:07.134 "bdev_split_delete", 00:05:07.134 "bdev_split_create", 00:05:07.134 "bdev_delay_delete", 00:05:07.134 "bdev_delay_create", 00:05:07.134 "bdev_delay_update_latency", 00:05:07.134 "bdev_zone_block_delete", 00:05:07.134 "bdev_zone_block_create", 00:05:07.134 "blobfs_create", 00:05:07.134 "blobfs_detect", 00:05:07.134 "blobfs_set_cache_size", 00:05:07.134 "bdev_xnvme_delete", 00:05:07.134 "bdev_xnvme_create", 00:05:07.134 "bdev_aio_delete", 00:05:07.134 "bdev_aio_rescan", 00:05:07.134 "bdev_aio_create", 00:05:07.134 "bdev_ftl_set_property", 00:05:07.134 "bdev_ftl_get_properties", 00:05:07.134 "bdev_ftl_get_stats", 00:05:07.134 "bdev_ftl_unmap", 00:05:07.134 "bdev_ftl_unload", 00:05:07.134 "bdev_ftl_delete", 00:05:07.134 "bdev_ftl_load", 00:05:07.134 "bdev_ftl_create", 00:05:07.134 "bdev_virtio_attach_controller", 00:05:07.134 "bdev_virtio_scsi_get_devices", 00:05:07.134 "bdev_virtio_detach_controller", 00:05:07.134 "bdev_virtio_blk_set_hotplug", 00:05:07.134 "bdev_iscsi_delete", 00:05:07.134 "bdev_iscsi_create", 00:05:07.134 "bdev_iscsi_set_options", 00:05:07.134 "accel_error_inject_error", 00:05:07.134 "ioat_scan_accel_module", 00:05:07.134 "dsa_scan_accel_module", 00:05:07.134 "iaa_scan_accel_module", 00:05:07.134 "iscsi_set_options", 00:05:07.134 "iscsi_get_auth_groups", 00:05:07.134 "iscsi_auth_group_remove_secret", 00:05:07.134 "iscsi_auth_group_add_secret", 00:05:07.134 "iscsi_delete_auth_group", 00:05:07.134 "iscsi_create_auth_group", 00:05:07.134 "iscsi_set_discovery_auth", 00:05:07.134 "iscsi_get_options", 00:05:07.134 "iscsi_target_node_request_logout", 00:05:07.134 "iscsi_target_node_set_redirect", 00:05:07.134 "iscsi_target_node_set_auth", 00:05:07.134 "iscsi_target_node_add_lun", 00:05:07.134 "iscsi_get_connections", 00:05:07.134 "iscsi_portal_group_set_auth", 00:05:07.134 "iscsi_start_portal_group", 00:05:07.134 "iscsi_delete_portal_group", 00:05:07.134 "iscsi_create_portal_group", 00:05:07.134 "iscsi_get_portal_groups", 00:05:07.134 "iscsi_delete_target_node", 00:05:07.134 "iscsi_target_node_remove_pg_ig_maps", 00:05:07.134 "iscsi_target_node_add_pg_ig_maps", 00:05:07.134 "iscsi_create_target_node", 00:05:07.134 "iscsi_get_target_nodes", 00:05:07.134 "iscsi_delete_initiator_group", 00:05:07.134 "iscsi_initiator_group_remove_initiators", 00:05:07.134 "iscsi_initiator_group_add_initiators", 00:05:07.134 "iscsi_create_initiator_group", 00:05:07.134 "iscsi_get_initiator_groups", 00:05:07.134 "nvmf_set_crdt", 00:05:07.134 "nvmf_set_config", 00:05:07.134 "nvmf_set_max_subsystems", 00:05:07.134 "nvmf_subsystem_get_listeners", 00:05:07.134 "nvmf_subsystem_get_qpairs", 00:05:07.134 "nvmf_subsystem_get_controllers", 00:05:07.134 "nvmf_get_stats", 00:05:07.134 "nvmf_get_transports", 00:05:07.134 "nvmf_create_transport", 00:05:07.134 "nvmf_get_targets", 00:05:07.134 "nvmf_delete_target", 00:05:07.134 "nvmf_create_target", 00:05:07.134 "nvmf_subsystem_allow_any_host", 00:05:07.134 "nvmf_subsystem_remove_host", 00:05:07.134 "nvmf_subsystem_add_host", 00:05:07.134 "nvmf_subsystem_remove_ns", 00:05:07.134 "nvmf_subsystem_add_ns", 00:05:07.134 "nvmf_subsystem_listener_set_ana_state", 00:05:07.134 "nvmf_discovery_get_referrals", 00:05:07.134 "nvmf_discovery_remove_referral", 00:05:07.134 "nvmf_discovery_add_referral", 00:05:07.134 "nvmf_subsystem_remove_listener", 00:05:07.134 
"nvmf_subsystem_add_listener", 00:05:07.134 "nvmf_delete_subsystem", 00:05:07.134 "nvmf_create_subsystem", 00:05:07.134 "nvmf_get_subsystems", 00:05:07.134 "env_dpdk_get_mem_stats", 00:05:07.134 "nbd_get_disks", 00:05:07.134 "nbd_stop_disk", 00:05:07.134 "nbd_start_disk", 00:05:07.134 "ublk_recover_disk", 00:05:07.134 "ublk_get_disks", 00:05:07.134 "ublk_stop_disk", 00:05:07.134 "ublk_start_disk", 00:05:07.134 "ublk_destroy_target", 00:05:07.134 "ublk_create_target", 00:05:07.134 "virtio_blk_create_transport", 00:05:07.134 "virtio_blk_get_transports", 00:05:07.134 "vhost_controller_set_coalescing", 00:05:07.134 "vhost_get_controllers", 00:05:07.134 "vhost_delete_controller", 00:05:07.134 "vhost_create_blk_controller", 00:05:07.134 "vhost_scsi_controller_remove_target", 00:05:07.134 "vhost_scsi_controller_add_target", 00:05:07.134 "vhost_start_scsi_controller", 00:05:07.134 "vhost_create_scsi_controller", 00:05:07.134 "thread_set_cpumask", 00:05:07.134 "framework_get_scheduler", 00:05:07.134 "framework_set_scheduler", 00:05:07.134 "framework_get_reactors", 00:05:07.134 "thread_get_io_channels", 00:05:07.134 "thread_get_pollers", 00:05:07.134 "thread_get_stats", 00:05:07.134 "framework_monitor_context_switch", 00:05:07.134 "spdk_kill_instance", 00:05:07.134 "log_enable_timestamps", 00:05:07.134 "log_get_flags", 00:05:07.134 "log_clear_flag", 00:05:07.134 "log_set_flag", 00:05:07.134 "log_get_level", 00:05:07.134 "log_set_level", 00:05:07.134 "log_get_print_level", 00:05:07.134 "log_set_print_level", 00:05:07.134 "framework_enable_cpumask_locks", 00:05:07.134 "framework_disable_cpumask_locks", 00:05:07.134 "framework_wait_init", 00:05:07.134 "framework_start_init", 00:05:07.134 "scsi_get_devices", 00:05:07.134 "bdev_get_histogram", 00:05:07.134 "bdev_enable_histogram", 00:05:07.134 "bdev_set_qos_limit", 00:05:07.134 "bdev_set_qd_sampling_period", 00:05:07.134 "bdev_get_bdevs", 00:05:07.134 "bdev_reset_iostat", 00:05:07.134 "bdev_get_iostat", 00:05:07.134 "bdev_examine", 00:05:07.134 "bdev_wait_for_examine", 00:05:07.134 "bdev_set_options", 00:05:07.134 "notify_get_notifications", 00:05:07.134 "notify_get_types", 00:05:07.134 "accel_get_stats", 00:05:07.134 "accel_set_options", 00:05:07.134 "accel_set_driver", 00:05:07.134 "accel_crypto_key_destroy", 00:05:07.134 "accel_crypto_keys_get", 00:05:07.134 "accel_crypto_key_create", 00:05:07.134 "accel_assign_opc", 00:05:07.134 "accel_get_module_info", 00:05:07.134 "accel_get_opc_assignments", 00:05:07.134 "vmd_rescan", 00:05:07.134 "vmd_remove_device", 00:05:07.134 "vmd_enable", 00:05:07.134 "sock_set_default_impl", 00:05:07.134 "sock_impl_set_options", 00:05:07.134 "sock_impl_get_options", 00:05:07.134 "iobuf_get_stats", 00:05:07.134 "iobuf_set_options", 00:05:07.134 "framework_get_pci_devices", 00:05:07.134 "framework_get_config", 00:05:07.134 "framework_get_subsystems", 00:05:07.134 "trace_get_info", 00:05:07.134 "trace_get_tpoint_group_mask", 00:05:07.134 "trace_disable_tpoint_group", 00:05:07.134 "trace_enable_tpoint_group", 00:05:07.134 "trace_clear_tpoint_mask", 00:05:07.134 "trace_set_tpoint_mask", 00:05:07.134 "spdk_get_version", 00:05:07.134 "rpc_get_methods" 00:05:07.134 ] 00:05:07.134 20:56:20 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:07.134 20:56:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:07.134 20:56:20 -- common/autotest_common.sh@10 -- # set +x 00:05:07.134 20:56:20 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:07.134 20:56:20 -- spdkcli/tcp.sh@38 -- # killprocess 56893 00:05:07.134 
20:56:20 -- common/autotest_common.sh@926 -- # '[' -z 56893 ']' 00:05:07.134 20:56:20 -- common/autotest_common.sh@930 -- # kill -0 56893 00:05:07.134 20:56:20 -- common/autotest_common.sh@931 -- # uname 00:05:07.134 20:56:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:07.134 20:56:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56893 00:05:07.134 20:56:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:07.134 20:56:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:07.134 20:56:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56893' 00:05:07.134 killing process with pid 56893 00:05:07.134 20:56:21 -- common/autotest_common.sh@945 -- # kill 56893 00:05:07.134 20:56:21 -- common/autotest_common.sh@950 -- # wait 56893 00:05:09.035 00:05:09.035 real 0m3.915s 00:05:09.035 user 0m7.375s 00:05:09.035 sys 0m0.496s 00:05:09.035 20:56:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.035 20:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.035 ************************************ 00:05:09.035 END TEST spdkcli_tcp 00:05:09.035 ************************************ 00:05:09.035 20:56:22 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:09.035 20:56:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.035 20:56:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.035 20:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.035 ************************************ 00:05:09.035 START TEST dpdk_mem_utility 00:05:09.035 ************************************ 00:05:09.035 20:56:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:09.035 * Looking for test storage... 00:05:09.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:09.035 20:56:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:09.035 20:56:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57003 00:05:09.035 20:56:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57003 00:05:09.035 20:56:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.035 20:56:22 -- common/autotest_common.sh@819 -- # '[' -z 57003 ']' 00:05:09.036 20:56:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.036 20:56:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.036 20:56:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.036 20:56:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.036 20:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.295 [2024-07-13 20:56:23.047081] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
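The memory report below is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders it, first as totals and then per element with -m 0, as invoked later in the trace. A sketch of the same flow against a running target:

# Step 1: ask the target to dump its DPDK memory state.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# -> {"filename": "/tmp/spdk_mem_dump.txt"}

# Step 2: summarize the dump (heaps, mempools, memzones), then print the
# per-element view the trace below shows for heap 0.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0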
00:05:09.295 [2024-07-13 20:56:23.047240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57003 ] 00:05:09.552 [2024-07-13 20:56:23.219761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.552 [2024-07-13 20:56:23.379313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:09.552 [2024-07-13 20:56:23.379544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.009 20:56:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.009 20:56:24 -- common/autotest_common.sh@852 -- # return 0 00:05:11.009 20:56:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:11.009 20:56:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:11.009 20:56:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.009 20:56:24 -- common/autotest_common.sh@10 -- # set +x 00:05:11.009 { 00:05:11.009 "filename": "/tmp/spdk_mem_dump.txt" 00:05:11.009 } 00:05:11.009 20:56:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.009 20:56:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:11.009 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:11.009 1 heaps totaling size 820.000000 MiB 00:05:11.009 size: 820.000000 MiB heap id: 0 00:05:11.009 end heaps---------- 00:05:11.009 8 mempools totaling size 598.116089 MiB 00:05:11.009 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:11.009 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:11.009 size: 84.521057 MiB name: bdev_io_57003 00:05:11.009 size: 51.011292 MiB name: evtpool_57003 00:05:11.009 size: 50.003479 MiB name: msgpool_57003 00:05:11.009 size: 21.763794 MiB name: PDU_Pool 00:05:11.009 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:11.009 size: 0.026123 MiB name: Session_Pool 00:05:11.009 end mempools------- 00:05:11.009 6 memzones totaling size 4.142822 MiB 00:05:11.009 size: 1.000366 MiB name: RG_ring_0_57003 00:05:11.009 size: 1.000366 MiB name: RG_ring_1_57003 00:05:11.009 size: 1.000366 MiB name: RG_ring_4_57003 00:05:11.009 size: 1.000366 MiB name: RG_ring_5_57003 00:05:11.009 size: 0.125366 MiB name: RG_ring_2_57003 00:05:11.009 size: 0.015991 MiB name: RG_ring_3_57003 00:05:11.009 end memzones------- 00:05:11.009 20:56:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:11.009 heap id: 0 total size: 820.000000 MiB number of busy elements: 301 number of free elements: 18 00:05:11.009 list of free elements. 
size: 18.451294 MiB 00:05:11.009 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:11.009 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:11.009 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:11.009 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:11.009 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:11.009 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:11.009 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:11.009 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:11.009 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:11.009 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:11.009 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:11.009 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:11.009 element at address: 0x20001b000000 with size: 0.563416 MiB 00:05:11.009 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:11.009 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:11.009 element at address: 0x200013800000 with size: 0.469116 MiB 00:05:11.009 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:11.009 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:11.009 list of standard malloc elements. size: 199.284302 MiB 00:05:11.009 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:11.009 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:11.009 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:11.009 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:11.009 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:11.009 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:11.009 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:11.009 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:11.009 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:11.009 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:11.009 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:11.009 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:11.009 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:11.010 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:11.010 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927d6c0 
with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b092bc0 with size: 0.000244 MiB 
00:05:11.010 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:11.010 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:11.011 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:11.011 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:11.011 element at 
address: 0x20002846b580 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846e680 
with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:11.011 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:11.011 list of memzone associated elements. 
size: 602.264404 MiB 00:05:11.011 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:11.011 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:11.011 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:11.011 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:11.011 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:11.011 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_57003_0 00:05:11.011 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:11.011 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57003_0 00:05:11.011 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:11.011 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57003_0 00:05:11.011 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:11.011 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:11.011 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:11.011 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:11.011 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:11.011 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57003 00:05:11.011 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:11.011 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57003 00:05:11.011 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:11.011 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57003 00:05:11.011 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:11.011 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:11.011 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:11.011 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:11.011 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:11.011 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:11.011 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:11.011 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:11.011 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:11.011 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57003 00:05:11.011 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:11.011 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57003 00:05:11.011 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:11.011 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57003 00:05:11.011 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:11.011 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57003 00:05:11.012 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:11.012 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57003 00:05:11.012 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:11.012 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:11.012 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:11.012 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:11.012 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:11.012 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:11.012 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:11.012 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_57003 00:05:11.012 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:11.012 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:11.012 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:11.012 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:11.012 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:11.012 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57003 00:05:11.012 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:11.012 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:11.012 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:11.012 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57003 00:05:11.012 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:11.012 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57003 00:05:11.012 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:11.012 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:11.012 20:56:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:11.012 20:56:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57003 00:05:11.012 20:56:24 -- common/autotest_common.sh@926 -- # '[' -z 57003 ']' 00:05:11.012 20:56:24 -- common/autotest_common.sh@930 -- # kill -0 57003 00:05:11.012 20:56:24 -- common/autotest_common.sh@931 -- # uname 00:05:11.012 20:56:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:11.012 20:56:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57003 00:05:11.012 20:56:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:11.012 killing process with pid 57003 00:05:11.012 20:56:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:11.012 20:56:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57003' 00:05:11.012 20:56:24 -- common/autotest_common.sh@945 -- # kill 57003 00:05:11.012 20:56:24 -- common/autotest_common.sh@950 -- # wait 57003 00:05:12.917 00:05:12.917 real 0m3.679s 00:05:12.917 user 0m3.992s 00:05:12.917 sys 0m0.418s 00:05:12.917 20:56:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.917 20:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:12.917 ************************************ 00:05:12.917 END TEST dpdk_mem_utility 00:05:12.917 ************************************ 00:05:12.917 20:56:26 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:12.917 20:56:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:12.917 20:56:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.917 20:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:12.917 ************************************ 00:05:12.917 START TEST event 00:05:12.917 ************************************ 00:05:12.917 20:56:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:12.917 * Looking for test storage... 
00:05:12.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:12.917 20:56:26 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:12.917 20:56:26 -- bdev/nbd_common.sh@6 -- # set -e 00:05:12.917 20:56:26 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.917 20:56:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:12.917 20:56:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.917 20:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:12.917 ************************************ 00:05:12.917 START TEST event_perf 00:05:12.917 ************************************ 00:05:12.917 20:56:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.917 Running I/O for 1 seconds...[2024-07-13 20:56:26.721234] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:12.917 [2024-07-13 20:56:26.721403] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57104 ] 00:05:13.176 [2024-07-13 20:56:26.885935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.176 [2024-07-13 20:56:27.042753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.176 [2024-07-13 20:56:27.042922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.176 [2024-07-13 20:56:27.043011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.176 Running I/O for 1 seconds...[2024-07-13 20:56:27.043256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.552 00:05:14.552 lcore 0: 196429 00:05:14.552 lcore 1: 196429 00:05:14.552 lcore 2: 196428 00:05:14.552 lcore 3: 196429 00:05:14.552 done. 00:05:14.552 00:05:14.552 real 0m1.668s 00:05:14.552 user 0m4.444s 00:05:14.552 sys 0m0.108s 00:05:14.552 ************************************ 00:05:14.552 END TEST event_perf 00:05:14.552 ************************************ 00:05:14.552 20:56:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.552 20:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.552 20:56:28 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:14.552 20:56:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:14.552 20:56:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.552 20:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.552 ************************************ 00:05:14.552 START TEST event_reactor 00:05:14.552 ************************************ 00:05:14.552 20:56:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:14.552 [2024-07-13 20:56:28.434556] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
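[Editor's sketch] The event_perf run that just completed can be reproduced by invoking the binary directly with the same arguments shown in the trace (workspace paths as in this log; -m is the reactor core mask, -t the duration in seconds). Each "lcore N:" line reports the events processed on one reactor core, so the roughly equal counts above (~196k per core under the 0xF mask) indicate work was distributed evenly:

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1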
00:05:14.552 [2024-07-13 20:56:28.434719] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57143 ] 00:05:14.810 [2024-07-13 20:56:28.595940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.068 [2024-07-13 20:56:28.750773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.443 test_start 00:05:16.443 oneshot 00:05:16.443 tick 100 00:05:16.443 tick 100 00:05:16.443 tick 250 00:05:16.443 tick 100 00:05:16.443 tick 100 00:05:16.443 tick 250 00:05:16.443 tick 500 00:05:16.443 tick 100 00:05:16.443 tick 100 00:05:16.443 tick 100 00:05:16.443 tick 250 00:05:16.443 tick 100 00:05:16.443 tick 100 00:05:16.443 test_end 00:05:16.443 00:05:16.443 real 0m1.658s 00:05:16.443 user 0m1.461s 00:05:16.443 sys 0m0.088s 00:05:16.443 20:56:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.443 20:56:30 -- common/autotest_common.sh@10 -- # set +x 00:05:16.443 ************************************ 00:05:16.443 END TEST event_reactor 00:05:16.443 ************************************ 00:05:16.443 20:56:30 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.443 20:56:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:16.443 20:56:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.443 20:56:30 -- common/autotest_common.sh@10 -- # set +x 00:05:16.443 ************************************ 00:05:16.443 START TEST event_reactor_perf 00:05:16.443 ************************************ 00:05:16.443 20:56:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.443 [2024-07-13 20:56:30.154283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
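[Editor's sketch] Every test above is driven through the same run_test wrapper, which produces the START/END banners and the real/user/sys timings interleaved in this log. A rough sketch of that wrapper, inferred from its visible output — the actual implementation in common/autotest_common.sh also manages xtrace and argument checks:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # the `time` keyword emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$rc"
    }

    # e.g. as traced above:
    run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1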
00:05:16.443 [2024-07-13 20:56:30.154517] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57180 ] 00:05:16.443 [2024-07-13 20:56:30.320176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.702 [2024-07-13 20:56:30.476797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.079 test_start 00:05:18.079 test_end 00:05:18.079 Performance: 324030 events per second 00:05:18.079 00:05:18.079 real 0m1.676s 00:05:18.079 user 0m1.477s 00:05:18.079 sys 0m0.089s 00:05:18.079 20:56:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.079 20:56:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.079 ************************************ 00:05:18.079 END TEST event_reactor_perf 00:05:18.079 ************************************ 00:05:18.079 20:56:31 -- event/event.sh@49 -- # uname -s 00:05:18.079 20:56:31 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:18.079 20:56:31 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.079 20:56:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.079 20:56:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.079 20:56:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.079 ************************************ 00:05:18.079 START TEST event_scheduler 00:05:18.079 ************************************ 00:05:18.079 20:56:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.079 * Looking for test storage... 00:05:18.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:18.079 20:56:31 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:18.079 20:56:31 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57247 00:05:18.079 20:56:31 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.079 20:56:31 -- scheduler/scheduler.sh@37 -- # waitforlisten 57247 00:05:18.079 20:56:31 -- common/autotest_common.sh@819 -- # '[' -z 57247 ']' 00:05:18.079 20:56:31 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:18.079 20:56:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.079 20:56:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:18.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.079 20:56:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.079 20:56:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:18.079 20:56:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.338 [2024-07-13 20:56:32.008937] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:18.338 [2024-07-13 20:56:32.009103] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57247 ] 00:05:18.338 [2024-07-13 20:56:32.180253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.597 [2024-07-13 20:56:32.386071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.597 [2024-07-13 20:56:32.386211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.597 [2024-07-13 20:56:32.386601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.597 [2024-07-13 20:56:32.386762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.164 20:56:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:19.164 20:56:32 -- common/autotest_common.sh@852 -- # return 0 00:05:19.164 20:56:32 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:19.164 20:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.164 20:56:32 -- common/autotest_common.sh@10 -- # set +x 00:05:19.164 POWER: Env isn't set yet! 00:05:19.164 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:19.164 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.164 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.164 POWER: Attempting to initialise PSTAT power management... 00:05:19.164 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.164 POWER: Cannot set governor of lcore 0 to performance 00:05:19.164 POWER: Attempting to initialise AMD PSTATE power management... 00:05:19.164 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.164 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.164 POWER: Attempting to initialise CPPC power management... 00:05:19.164 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.164 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.164 POWER: Attempting to initialise VM power management... 
00:05:19.164 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:19.164 POWER: Unable to set Power Management Environment for lcore 0 00:05:19.164 [2024-07-13 20:56:32.944405] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:19.164 [2024-07-13 20:56:32.944439] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:19.164 [2024-07-13 20:56:32.944462] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:19.164 [2024-07-13 20:56:32.944492] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:19.164 [2024-07-13 20:56:32.944515] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:19.164 [2024-07-13 20:56:32.944535] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:19.164 20:56:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.164 20:56:32 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:19.164 20:56:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.164 20:56:32 -- common/autotest_common.sh@10 -- # set +x 00:05:19.424 [2024-07-13 20:56:33.224322] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:19.424 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.424 20:56:33 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:19.424 20:56:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.424 20:56:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.424 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 ************************************ 00:05:19.425 START TEST scheduler_create_thread 00:05:19.425 ************************************ 00:05:19.425 20:56:33 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 2 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 3 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 4 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 5 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 6 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 7 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 8 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 9 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 10 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:19.425 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.425 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 20:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.425 20:56:33 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:19.426 20:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:19.426 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:20.804 20:56:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:20.804 20:56:34 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:20.804 20:56:34 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:20.804 20:56:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:20.804 20:56:34 -- common/autotest_common.sh@10 -- # set +x 00:05:21.740 20:56:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:21.740 00:05:21.740 real 0m2.138s 00:05:21.740 user 0m0.019s 00:05:21.740 sys 0m0.006s 00:05:21.740 ************************************ 00:05:21.740 END TEST scheduler_create_thread 00:05:21.740 ************************************ 00:05:21.740 20:56:35 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.740 20:56:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.740 20:56:35 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:21.740 20:56:35 -- scheduler/scheduler.sh@46 -- # killprocess 57247 00:05:21.740 20:56:35 -- common/autotest_common.sh@926 -- # '[' -z 57247 ']' 00:05:21.740 20:56:35 -- common/autotest_common.sh@930 -- # kill -0 57247 00:05:21.740 20:56:35 -- common/autotest_common.sh@931 -- # uname 00:05:21.740 20:56:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:21.740 20:56:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57247 00:05:21.740 20:56:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:21.740 killing process with pid 57247 00:05:21.740 20:56:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:21.740 20:56:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57247' 00:05:21.740 20:56:35 -- common/autotest_common.sh@945 -- # kill 57247 00:05:21.740 20:56:35 -- common/autotest_common.sh@950 -- # wait 57247 00:05:21.999 [2024-07-13 20:56:35.856221] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:22.935 00:05:22.935 real 0m4.963s 00:05:22.935 user 0m8.354s 00:05:22.935 sys 0m0.379s 00:05:22.935 20:56:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.935 ************************************ 00:05:22.935 END TEST event_scheduler 00:05:22.935 ************************************ 00:05:22.935 20:56:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.935 20:56:36 -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.935 20:56:36 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.935 20:56:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.935 20:56:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.935 20:56:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.194 ************************************ 00:05:23.194 START TEST app_repeat 00:05:23.194 ************************************ 00:05:23.194 20:56:36 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:23.194 20:56:36 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.194 20:56:36 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.194 20:56:36 -- event/event.sh@13 -- # local nbd_list 00:05:23.194 20:56:36 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.194 20:56:36 -- event/event.sh@14 -- # local bdev_list 00:05:23.194 20:56:36 -- event/event.sh@15 -- # local repeat_times=4 00:05:23.194 20:56:36 -- event/event.sh@17 -- # modprobe nbd 00:05:23.194 20:56:36 -- event/event.sh@19 -- # repeat_pid=57353 00:05:23.194 Process app_repeat pid: 57353 00:05:23.194 20:56:36 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.194 20:56:36 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:23.194 20:56:36 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57353' 00:05:23.194 spdk_app_start Round 0 00:05:23.194 20:56:36 -- event/event.sh@23 -- # for i in {0..2} 00:05:23.194 20:56:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:23.194 20:56:36 -- event/event.sh@25 -- # waitforlisten 57353 /var/tmp/spdk-nbd.sock 00:05:23.194 20:56:36 -- common/autotest_common.sh@819 -- # '[' -z 57353 ']' 00:05:23.194 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-nbd.sock... 00:05:23.194 20:56:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.194 20:56:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:23.194 20:56:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.194 20:56:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:23.194 20:56:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.194 [2024-07-13 20:56:36.909847] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:23.194 [2024-07-13 20:56:36.910054] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57353 ] 00:05:23.194 [2024-07-13 20:56:37.065590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.453 [2024-07-13 20:56:37.230914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.453 [2024-07-13 20:56:37.230930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.018 20:56:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:24.018 20:56:37 -- common/autotest_common.sh@852 -- # return 0 00:05:24.018 20:56:37 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.317 Malloc0 00:05:24.317 20:56:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.576 Malloc1 00:05:24.576 20:56:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@12 -- # local i 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.576 20:56:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.834 /dev/nbd0 00:05:24.834 20:56:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.834 20:56:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.834 20:56:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:24.834 20:56:38 -- common/autotest_common.sh@857 -- # local i 00:05:24.834 20:56:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:24.834 20:56:38 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:24.834 20:56:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:24.834 20:56:38 -- common/autotest_common.sh@861 -- # break 00:05:24.834 20:56:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:24.834 20:56:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:24.834 20:56:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.834 1+0 records in 00:05:24.834 1+0 records out 00:05:24.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413651 s, 9.9 MB/s 00:05:24.834 20:56:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.834 20:56:38 -- common/autotest_common.sh@874 -- # size=4096 00:05:24.834 20:56:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.834 20:56:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:24.834 20:56:38 -- common/autotest_common.sh@877 -- # return 0 00:05:24.835 20:56:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.835 20:56:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.835 20:56:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.093 /dev/nbd1 00:05:25.352 20:56:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.352 20:56:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.352 20:56:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:25.352 20:56:39 -- common/autotest_common.sh@857 -- # local i 00:05:25.352 20:56:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:25.352 20:56:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:25.352 20:56:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:25.352 20:56:39 -- common/autotest_common.sh@861 -- # break 00:05:25.352 20:56:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:25.352 20:56:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:25.352 20:56:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.352 1+0 records in 00:05:25.352 1+0 records out 00:05:25.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346334 s, 11.8 MB/s 00:05:25.352 20:56:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.352 20:56:39 -- common/autotest_common.sh@874 -- # size=4096 00:05:25.352 20:56:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.352 20:56:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:25.352 20:56:39 -- common/autotest_common.sh@877 -- # return 0 00:05:25.352 20:56:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.352 20:56:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.352 20:56:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.352 20:56:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.352 20:56:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.611 { 00:05:25.611 "nbd_device": "/dev/nbd0", 00:05:25.611 "bdev_name": "Malloc0" 00:05:25.611 }, 00:05:25.611 { 00:05:25.611 "nbd_device": "/dev/nbd1", 
00:05:25.611 "bdev_name": "Malloc1" 00:05:25.611 } 00:05:25.611 ]' 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.611 { 00:05:25.611 "nbd_device": "/dev/nbd0", 00:05:25.611 "bdev_name": "Malloc0" 00:05:25.611 }, 00:05:25.611 { 00:05:25.611 "nbd_device": "/dev/nbd1", 00:05:25.611 "bdev_name": "Malloc1" 00:05:25.611 } 00:05:25.611 ]' 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.611 /dev/nbd1' 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.611 /dev/nbd1' 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.611 256+0 records in 00:05:25.611 256+0 records out 00:05:25.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590948 s, 177 MB/s 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.611 256+0 records in 00:05:25.611 256+0 records out 00:05:25.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294149 s, 35.6 MB/s 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.611 256+0 records in 00:05:25.611 256+0 records out 00:05:25.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329597 s, 31.8 MB/s 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.611 20:56:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@51 -- # local i 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.612 20:56:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.870 20:56:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.870 20:56:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.870 20:56:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.870 20:56:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.870 20:56:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.870 20:56:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.870 20:56:39 -- bdev/nbd_common.sh@41 -- # break 00:05:25.870 20:56:39 -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.870 20:56:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.870 20:56:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@41 -- # break 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.129 20:56:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@65 -- # true 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.388 20:56:40 -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.388 20:56:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.955 20:56:40 -- event/event.sh@35 -- # sleep 3 00:05:27.890 [2024-07-13 20:56:41.677209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.149 [2024-07-13 20:56:41.843008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.149 [2024-07-13 
20:56:41.843014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.149 [2024-07-13 20:56:42.002610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.150 [2024-07-13 20:56:42.002718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.053 20:56:43 -- event/event.sh@23 -- # for i in {0..2} 00:05:30.053 spdk_app_start Round 1 00:05:30.053 20:56:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:30.053 20:56:43 -- event/event.sh@25 -- # waitforlisten 57353 /var/tmp/spdk-nbd.sock 00:05:30.053 20:56:43 -- common/autotest_common.sh@819 -- # '[' -z 57353 ']' 00:05:30.053 20:56:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.053 20:56:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.053 20:56:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.053 20:56:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.053 20:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:30.053 20:56:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.053 20:56:43 -- common/autotest_common.sh@852 -- # return 0 00:05:30.053 20:56:43 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.312 Malloc0 00:05:30.312 20:56:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.570 Malloc1 00:05:30.570 20:56:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@12 -- # local i 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.570 20:56:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.829 /dev/nbd0 00:05:30.829 20:56:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.829 20:56:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.829 20:56:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:30.829 20:56:44 -- common/autotest_common.sh@857 -- # local i 00:05:30.829 20:56:44 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:05:30.829 20:56:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:30.829 20:56:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:30.829 20:56:44 -- common/autotest_common.sh@861 -- # break 00:05:30.829 20:56:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:30.829 20:56:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:30.829 20:56:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.829 1+0 records in 00:05:30.829 1+0 records out 00:05:30.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320085 s, 12.8 MB/s 00:05:30.829 20:56:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.829 20:56:44 -- common/autotest_common.sh@874 -- # size=4096 00:05:30.829 20:56:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.829 20:56:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:30.829 20:56:44 -- common/autotest_common.sh@877 -- # return 0 00:05:30.829 20:56:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.829 20:56:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.829 20:56:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.088 /dev/nbd1 00:05:31.088 20:56:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.088 20:56:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.088 20:56:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:31.088 20:56:44 -- common/autotest_common.sh@857 -- # local i 00:05:31.088 20:56:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:31.088 20:56:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:31.088 20:56:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:31.088 20:56:44 -- common/autotest_common.sh@861 -- # break 00:05:31.088 20:56:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:31.088 20:56:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:31.088 20:56:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.088 1+0 records in 00:05:31.088 1+0 records out 00:05:31.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024761 s, 16.5 MB/s 00:05:31.088 20:56:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.088 20:56:44 -- common/autotest_common.sh@874 -- # size=4096 00:05:31.088 20:56:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.088 20:56:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:31.088 20:56:44 -- common/autotest_common.sh@877 -- # return 0 00:05:31.088 20:56:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.088 20:56:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.088 20:56:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.088 20:56:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.088 20:56:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.347 { 00:05:31.347 "nbd_device": "/dev/nbd0", 00:05:31.347 "bdev_name": "Malloc0" 00:05:31.347 }, 00:05:31.347 { 00:05:31.347 
"nbd_device": "/dev/nbd1", 00:05:31.347 "bdev_name": "Malloc1" 00:05:31.347 } 00:05:31.347 ]' 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.347 { 00:05:31.347 "nbd_device": "/dev/nbd0", 00:05:31.347 "bdev_name": "Malloc0" 00:05:31.347 }, 00:05:31.347 { 00:05:31.347 "nbd_device": "/dev/nbd1", 00:05:31.347 "bdev_name": "Malloc1" 00:05:31.347 } 00:05:31.347 ]' 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.347 /dev/nbd1' 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.347 /dev/nbd1' 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.347 20:56:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.348 256+0 records in 00:05:31.348 256+0 records out 00:05:31.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00673231 s, 156 MB/s 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.348 256+0 records in 00:05:31.348 256+0 records out 00:05:31.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246318 s, 42.6 MB/s 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.348 256+0 records in 00:05:31.348 256+0 records out 00:05:31.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290623 s, 36.1 MB/s 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.348 20:56:45 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@51 -- # local i 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.348 20:56:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.606 20:56:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.606 20:56:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.606 20:56:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.606 20:56:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.606 20:56:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.606 20:56:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.606 20:56:45 -- bdev/nbd_common.sh@41 -- # break 00:05:31.606 20:56:45 -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.606 20:56:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.606 20:56:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.865 20:56:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.865 20:56:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.865 20:56:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.865 20:56:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.865 20:56:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.865 20:56:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.124 20:56:45 -- bdev/nbd_common.sh@41 -- # break 00:05:32.124 20:56:45 -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.124 20:56:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.124 20:56:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.124 20:56:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.124 20:56:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.124 20:56:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.124 20:56:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.384 20:56:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.384 20:56:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.384 20:56:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.384 20:56:46 -- bdev/nbd_common.sh@65 -- # true 00:05:32.384 20:56:46 -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.384 20:56:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.384 20:56:46 -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.384 20:56:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.384 20:56:46 -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.384 20:56:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.643 20:56:46 -- event/event.sh@35 -- # sleep 3 00:05:33.585 [2024-07-13 20:56:47.420835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.843 [2024-07-13 20:56:47.591823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
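[Editor's sketch] Each app_repeat round above exercises the same nbd write/verify cycle: fill a 1 MiB template file with random data, dd it through each nbd device with O_DIRECT, then cmp the device contents back against the template. The core of that cycle, using exactly the commands visible in the trace — nbd_common.sh additionally wraps it with the RPC calls that start and stop the nbd disks:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB random template
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through the block device
        cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte read-back check
    done
    rm "$tmp"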
00:05:33.843 [2024-07-13 20:56:47.591825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.843 [2024-07-13 20:56:47.726288] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.843 [2024-07-13 20:56:47.726415] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.749 20:56:49 -- event/event.sh@23 -- # for i in {0..2} 00:05:35.749 spdk_app_start Round 2 00:05:35.749 20:56:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.749 20:56:49 -- event/event.sh@25 -- # waitforlisten 57353 /var/tmp/spdk-nbd.sock 00:05:35.749 20:56:49 -- common/autotest_common.sh@819 -- # '[' -z 57353 ']' 00:05:35.749 20:56:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.749 20:56:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:35.749 20:56:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.749 20:56:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:35.749 20:56:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.008 20:56:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.008 20:56:49 -- common/autotest_common.sh@852 -- # return 0 00:05:36.008 20:56:49 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.267 Malloc0 00:05:36.267 20:56:49 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.526 Malloc1 00:05:36.526 20:56:50 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@12 -- # local i 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.526 20:56:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.526 /dev/nbd0 00:05:36.785 20:56:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.785 20:56:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.785 20:56:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:36.785 20:56:50 -- common/autotest_common.sh@857 -- # local i 00:05:36.785 20:56:50 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:36.785 20:56:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:36.785 20:56:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:36.785 20:56:50 -- common/autotest_common.sh@861 -- # break 00:05:36.785 20:56:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:36.785 20:56:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:36.785 20:56:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.785 1+0 records in 00:05:36.785 1+0 records out 00:05:36.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269561 s, 15.2 MB/s 00:05:36.785 20:56:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.785 20:56:50 -- common/autotest_common.sh@874 -- # size=4096 00:05:36.785 20:56:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.785 20:56:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:36.785 20:56:50 -- common/autotest_common.sh@877 -- # return 0 00:05:36.785 20:56:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.785 20:56:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.785 20:56:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.044 /dev/nbd1 00:05:37.044 20:56:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.044 20:56:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.044 20:56:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:37.044 20:56:50 -- common/autotest_common.sh@857 -- # local i 00:05:37.044 20:56:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:37.044 20:56:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:37.044 20:56:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:37.044 20:56:50 -- common/autotest_common.sh@861 -- # break 00:05:37.044 20:56:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:37.044 20:56:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:37.044 20:56:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.044 1+0 records in 00:05:37.044 1+0 records out 00:05:37.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028047 s, 14.6 MB/s 00:05:37.044 20:56:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.045 20:56:50 -- common/autotest_common.sh@874 -- # size=4096 00:05:37.045 20:56:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.045 20:56:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:37.045 20:56:50 -- common/autotest_common.sh@877 -- # return 0 00:05:37.045 20:56:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.045 20:56:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.045 20:56:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.045 20:56:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.045 20:56:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.305 { 00:05:37.305 "nbd_device": "/dev/nbd0", 00:05:37.305 "bdev_name": "Malloc0" 
00:05:37.305 }, 00:05:37.305 { 00:05:37.305 "nbd_device": "/dev/nbd1", 00:05:37.305 "bdev_name": "Malloc1" 00:05:37.305 } 00:05:37.305 ]' 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.305 { 00:05:37.305 "nbd_device": "/dev/nbd0", 00:05:37.305 "bdev_name": "Malloc0" 00:05:37.305 }, 00:05:37.305 { 00:05:37.305 "nbd_device": "/dev/nbd1", 00:05:37.305 "bdev_name": "Malloc1" 00:05:37.305 } 00:05:37.305 ]' 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.305 /dev/nbd1' 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.305 /dev/nbd1' 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.305 256+0 records in 00:05:37.305 256+0 records out 00:05:37.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102827 s, 102 MB/s 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.305 256+0 records in 00:05:37.305 256+0 records out 00:05:37.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246685 s, 42.5 MB/s 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.305 256+0 records in 00:05:37.305 256+0 records out 00:05:37.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269533 s, 38.9 MB/s 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@51 -- # local i 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.305 20:56:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.564 20:56:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.564 20:56:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.564 20:56:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.564 20:56:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.564 20:56:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.564 20:56:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.564 20:56:51 -- bdev/nbd_common.sh@41 -- # break 00:05:37.564 20:56:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.564 20:56:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.564 20:56:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@41 -- # break 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.823 20:56:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.082 20:56:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.082 20:56:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.082 20:56:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.341 20:56:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.341 20:56:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.341 20:56:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.341 20:56:52 -- bdev/nbd_common.sh@65 -- # true 00:05:38.341 20:56:52 -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.341 20:56:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.341 20:56:52 -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.341 20:56:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.341 20:56:52 -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.341 20:56:52 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.604 20:56:52 -- event/event.sh@35 -- # sleep 3 00:05:39.577 [2024-07-13 20:56:53.353017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.836 [2024-07-13 20:56:53.511166] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:39.836 [2024-07-13 20:56:53.511173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.836 [2024-07-13 20:56:53.651993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.836 [2024-07-13 20:56:53.652067] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.740 20:56:55 -- event/event.sh@38 -- # waitforlisten 57353 /var/tmp/spdk-nbd.sock 00:05:41.740 20:56:55 -- common/autotest_common.sh@819 -- # '[' -z 57353 ']' 00:05:41.740 20:56:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.740 20:56:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.740 20:56:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.740 20:56:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.740 20:56:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.740 20:56:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.740 20:56:55 -- common/autotest_common.sh@852 -- # return 0 00:05:41.740 20:56:55 -- event/event.sh@39 -- # killprocess 57353 00:05:41.740 20:56:55 -- common/autotest_common.sh@926 -- # '[' -z 57353 ']' 00:05:41.740 20:56:55 -- common/autotest_common.sh@930 -- # kill -0 57353 00:05:41.740 20:56:55 -- common/autotest_common.sh@931 -- # uname 00:05:41.740 20:56:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:41.740 20:56:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57353 00:05:42.000 killing process with pid 57353 00:05:42.000 20:56:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:42.000 20:56:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:42.000 20:56:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57353' 00:05:42.000 20:56:55 -- common/autotest_common.sh@945 -- # kill 57353 00:05:42.000 20:56:55 -- common/autotest_common.sh@950 -- # wait 57353 00:05:42.939 spdk_app_start is called in Round 0. 00:05:42.939 Shutdown signal received, stop current app iteration 00:05:42.939 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:05:42.939 spdk_app_start is called in Round 1. 00:05:42.939 Shutdown signal received, stop current app iteration 00:05:42.939 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:05:42.939 spdk_app_start is called in Round 2. 00:05:42.939 Shutdown signal received, stop current app iteration 00:05:42.939 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:05:42.939 spdk_app_start is called in Round 3. 
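The nbd_dd_data_verify write/verify pass traced at nbd_common.sh@70-85 earlier in this block condenses to the following (paths, sizes, and flags taken verbatim from the log; a sketch, not the test script itself):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct     # write pass, no page cache
    done
    for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp_file" "$dev"                                # verify pass: byte compare
    done
    rm "$tmp_file"

A cmp mismatch would fail the test at this point; the clean run above is what lets the suite proceed to nbd_stop_disks.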
00:05:42.939 Shutdown signal received, stop current app iteration 00:05:42.939 20:56:56 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:42.939 20:56:56 -- event/event.sh@42 -- # return 0 00:05:42.939 00:05:42.939 real 0m19.692s 00:05:42.939 user 0m42.627s 00:05:42.939 sys 0m2.496s 00:05:42.939 20:56:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.939 ************************************ 00:05:42.939 20:56:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.939 END TEST app_repeat 00:05:42.939 ************************************ 00:05:42.939 20:56:56 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:42.939 20:56:56 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.939 20:56:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.939 20:56:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.939 20:56:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.939 ************************************ 00:05:42.939 START TEST cpu_locks 00:05:42.939 ************************************ 00:05:42.939 20:56:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.939 * Looking for test storage... 00:05:42.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.939 20:56:56 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:42.939 20:56:56 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:42.939 20:56:56 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:42.939 20:56:56 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:42.939 20:56:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.939 20:56:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.939 20:56:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.939 ************************************ 00:05:42.939 START TEST default_locks 00:05:42.939 ************************************ 00:05:42.939 20:56:56 -- common/autotest_common.sh@1104 -- # default_locks 00:05:42.939 20:56:56 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57791 00:05:42.939 20:56:56 -- event/cpu_locks.sh@47 -- # waitforlisten 57791 00:05:42.939 20:56:56 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.939 20:56:56 -- common/autotest_common.sh@819 -- # '[' -z 57791 ']' 00:05:42.939 20:56:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.939 20:56:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.939 20:56:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.939 20:56:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.939 20:56:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.939 [2024-07-13 20:56:56.787718] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
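The START/END banners and the real/user/sys block above come from the run_test wrapper in autotest_common.sh. Inferred from this output alone, its rough shape is (a sketch; the real wrapper also manages xtrace state and failure propagation):

    run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                      # produces the real/user/sys lines logged above
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
    }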
00:05:42.939 [2024-07-13 20:56:56.787883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57791 ] 00:05:43.198 [2024-07-13 20:56:56.940565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.198 [2024-07-13 20:56:57.091498] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.198 [2024-07-13 20:56:57.091735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.135 20:56:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.135 20:56:57 -- common/autotest_common.sh@852 -- # return 0 00:05:44.135 20:56:57 -- event/cpu_locks.sh@49 -- # locks_exist 57791 00:05:44.135 20:56:57 -- event/cpu_locks.sh@22 -- # lslocks -p 57791 00:05:44.135 20:56:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.393 20:56:58 -- event/cpu_locks.sh@50 -- # killprocess 57791 00:05:44.393 20:56:58 -- common/autotest_common.sh@926 -- # '[' -z 57791 ']' 00:05:44.393 20:56:58 -- common/autotest_common.sh@930 -- # kill -0 57791 00:05:44.393 20:56:58 -- common/autotest_common.sh@931 -- # uname 00:05:44.393 20:56:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:44.393 20:56:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57791 00:05:44.393 20:56:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:44.393 20:56:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:44.393 killing process with pid 57791 00:05:44.393 20:56:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57791' 00:05:44.393 20:56:58 -- common/autotest_common.sh@945 -- # kill 57791 00:05:44.394 20:56:58 -- common/autotest_common.sh@950 -- # wait 57791 00:05:46.311 20:56:59 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57791 00:05:46.311 20:56:59 -- common/autotest_common.sh@640 -- # local es=0 00:05:46.311 20:56:59 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57791 00:05:46.311 20:56:59 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:46.311 20:56:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:46.311 20:56:59 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:46.311 20:56:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:46.311 20:56:59 -- common/autotest_common.sh@643 -- # waitforlisten 57791 00:05:46.311 20:56:59 -- common/autotest_common.sh@819 -- # '[' -z 57791 ']' 00:05:46.311 20:56:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.311 20:56:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.311 20:56:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
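locks_exist, traced at cpu_locks.sh@22 above, is essentially a two-command pipeline: ask lslocks which files the target pid holds POSIX locks on, and require one named spdk_cpu_lock.

    locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock   # succeeds iff the core lock is held
    }

This is how the suite distinguishes a target that actually claimed its core from one that merely started.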
00:05:46.311 20:56:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.311 20:56:59 -- common/autotest_common.sh@10 -- # set +x 00:05:46.311 ERROR: process (pid: 57791) is no longer running 00:05:46.311 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57791) - No such process 00:05:46.311 20:56:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.311 20:56:59 -- common/autotest_common.sh@852 -- # return 1 00:05:46.311 20:56:59 -- common/autotest_common.sh@643 -- # es=1 00:05:46.311 20:56:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:46.311 20:56:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:46.311 20:56:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:46.311 20:56:59 -- event/cpu_locks.sh@54 -- # no_locks 00:05:46.311 20:56:59 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.311 20:56:59 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.311 20:56:59 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.311 00:05:46.311 real 0m3.205s 00:05:46.311 user 0m3.336s 00:05:46.311 sys 0m0.559s 00:05:46.311 20:56:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.311 20:56:59 -- common/autotest_common.sh@10 -- # set +x 00:05:46.311 ************************************ 00:05:46.311 END TEST default_locks 00:05:46.311 ************************************ 00:05:46.311 20:56:59 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:46.311 20:56:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.311 20:56:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.311 20:56:59 -- common/autotest_common.sh@10 -- # set +x 00:05:46.311 ************************************ 00:05:46.311 START TEST default_locks_via_rpc 00:05:46.311 ************************************ 00:05:46.311 20:56:59 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:46.311 20:56:59 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57855 00:05:46.311 20:56:59 -- event/cpu_locks.sh@63 -- # waitforlisten 57855 00:05:46.311 20:56:59 -- common/autotest_common.sh@819 -- # '[' -z 57855 ']' 00:05:46.311 20:56:59 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.311 20:56:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.311 20:56:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.311 20:56:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.311 20:56:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.311 20:56:59 -- common/autotest_common.sh@10 -- # set +x 00:05:46.311 [2024-07-13 20:57:00.068946] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
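The NOT/es bookkeeping in the trace above is an expected-failure assertion: run the command, capture its exit status, and pass only if it failed. A simplified equivalent (the real helper also screens es > 128, so a signal death is not mistaken for a clean expected failure):

    NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))    # invert: the wrapped command failing is the passing case
    }

Here it wraps waitforlisten 57791 after the target was killed, so the "No such process" output above is exactly what the test wants to see.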
00:05:46.311 [2024-07-13 20:57:00.069138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57855 ] 00:05:46.569 [2024-07-13 20:57:00.241921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.569 [2024-07-13 20:57:00.435653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.569 [2024-07-13 20:57:00.435890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.946 20:57:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.946 20:57:01 -- common/autotest_common.sh@852 -- # return 0 00:05:47.946 20:57:01 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:47.946 20:57:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.946 20:57:01 -- common/autotest_common.sh@10 -- # set +x 00:05:47.946 20:57:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.946 20:57:01 -- event/cpu_locks.sh@67 -- # no_locks 00:05:47.946 20:57:01 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.946 20:57:01 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.946 20:57:01 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.946 20:57:01 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.946 20:57:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.946 20:57:01 -- common/autotest_common.sh@10 -- # set +x 00:05:47.946 20:57:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.946 20:57:01 -- event/cpu_locks.sh@71 -- # locks_exist 57855 00:05:47.946 20:57:01 -- event/cpu_locks.sh@22 -- # lslocks -p 57855 00:05:47.946 20:57:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.204 20:57:02 -- event/cpu_locks.sh@73 -- # killprocess 57855 00:05:48.204 20:57:02 -- common/autotest_common.sh@926 -- # '[' -z 57855 ']' 00:05:48.204 20:57:02 -- common/autotest_common.sh@930 -- # kill -0 57855 00:05:48.204 20:57:02 -- common/autotest_common.sh@931 -- # uname 00:05:48.204 20:57:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:48.204 20:57:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57855 00:05:48.462 20:57:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:48.462 20:57:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:48.462 killing process with pid 57855 00:05:48.462 20:57:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57855' 00:05:48.462 20:57:02 -- common/autotest_common.sh@945 -- # kill 57855 00:05:48.462 20:57:02 -- common/autotest_common.sh@950 -- # wait 57855 00:05:50.372 00:05:50.372 real 0m3.972s 00:05:50.372 user 0m4.285s 00:05:50.372 sys 0m0.586s 00:05:50.372 20:57:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.372 ************************************ 00:05:50.372 END TEST default_locks_via_rpc 00:05:50.372 ************************************ 00:05:50.372 20:57:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.372 20:57:03 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:50.372 20:57:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.372 20:57:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.372 20:57:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.372 
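default_locks_via_rpc, which just finished above, toggles the same core locks at runtime rather than at startup. The rpc_cmd invocations in its trace reduce to two calls against the target's socket (method names confirmed by the trace; a sketch of the sequence, not the verbatim test):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # lock files released
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # core lock re-acquired
    lslocks -p "$spdk_tgt_pid" | grep spdk_cpu_lock                        # held again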
************************************ 00:05:50.372 START TEST non_locking_app_on_locked_coremask 00:05:50.372 ************************************ 00:05:50.372 20:57:03 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:50.372 20:57:03 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57931 00:05:50.372 20:57:03 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.372 20:57:03 -- event/cpu_locks.sh@81 -- # waitforlisten 57931 /var/tmp/spdk.sock 00:05:50.372 20:57:03 -- common/autotest_common.sh@819 -- # '[' -z 57931 ']' 00:05:50.372 20:57:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.372 20:57:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.373 20:57:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.373 20:57:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.373 20:57:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.373 [2024-07-13 20:57:04.082652] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:50.373 [2024-07-13 20:57:04.082831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57931 ] 00:05:50.373 [2024-07-13 20:57:04.258433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.630 [2024-07-13 20:57:04.412183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.630 [2024-07-13 20:57:04.412432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.004 20:57:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.004 20:57:05 -- common/autotest_common.sh@852 -- # return 0 00:05:52.004 20:57:05 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57955 00:05:52.004 20:57:05 -- event/cpu_locks.sh@85 -- # waitforlisten 57955 /var/tmp/spdk2.sock 00:05:52.004 20:57:05 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:52.004 20:57:05 -- common/autotest_common.sh@819 -- # '[' -z 57955 ']' 00:05:52.004 20:57:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.004 20:57:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.004 20:57:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.004 20:57:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.004 20:57:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.004 [2024-07-13 20:57:05.830494] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:52.004 [2024-07-13 20:57:05.830653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57955 ] 00:05:52.263 [2024-07-13 20:57:05.993220] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
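The launch pattern for this test, condensed from the flags logged above (a sketch, not the verbatim script): a first target claims core 0, then a second target on the same mask opts out of locking and talks on its own RPC socket.

    build/bin/spdk_tgt -m 0x1 & pid=$!                  # claims the core 0 lock
    waitforlisten "$pid" /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock           # succeeds despite the overlap

The "CPU core locks deactivated." notice above is the second instance acknowledging the opt-out.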
00:05:52.263 [2024-07-13 20:57:05.993303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.522 [2024-07-13 20:57:06.308887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.522 [2024-07-13 20:57:06.309123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.425 20:57:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.425 20:57:08 -- common/autotest_common.sh@852 -- # return 0 00:05:54.425 20:57:08 -- event/cpu_locks.sh@87 -- # locks_exist 57931 00:05:54.425 20:57:08 -- event/cpu_locks.sh@22 -- # lslocks -p 57931 00:05:54.425 20:57:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.994 20:57:08 -- event/cpu_locks.sh@89 -- # killprocess 57931 00:05:54.994 20:57:08 -- common/autotest_common.sh@926 -- # '[' -z 57931 ']' 00:05:54.994 20:57:08 -- common/autotest_common.sh@930 -- # kill -0 57931 00:05:54.994 20:57:08 -- common/autotest_common.sh@931 -- # uname 00:05:54.994 20:57:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.994 20:57:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57931 00:05:54.994 20:57:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.994 20:57:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.994 killing process with pid 57931 00:05:54.994 20:57:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57931' 00:05:54.994 20:57:08 -- common/autotest_common.sh@945 -- # kill 57931 00:05:54.994 20:57:08 -- common/autotest_common.sh@950 -- # wait 57931 00:05:59.200 20:57:12 -- event/cpu_locks.sh@90 -- # killprocess 57955 00:05:59.200 20:57:12 -- common/autotest_common.sh@926 -- # '[' -z 57955 ']' 00:05:59.200 20:57:12 -- common/autotest_common.sh@930 -- # kill -0 57955 00:05:59.200 20:57:12 -- common/autotest_common.sh@931 -- # uname 00:05:59.200 20:57:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.200 20:57:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57955 00:05:59.200 20:57:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:59.200 20:57:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:59.200 killing process with pid 57955 00:05:59.200 20:57:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57955' 00:05:59.200 20:57:12 -- common/autotest_common.sh@945 -- # kill 57955 00:05:59.200 20:57:12 -- common/autotest_common.sh@950 -- # wait 57955 00:06:00.136 00:06:00.136 real 0m10.033s 00:06:00.136 user 0m10.988s 00:06:00.136 sys 0m1.114s 00:06:00.136 20:57:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.136 20:57:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.136 ************************************ 00:06:00.136 END TEST non_locking_app_on_locked_coremask 00:06:00.136 ************************************ 00:06:00.136 20:57:14 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:00.136 20:57:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.136 20:57:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.136 20:57:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.395 ************************************ 00:06:00.395 START TEST locking_app_on_unlocked_coremask 00:06:00.395 ************************************ 00:06:00.395 20:57:14 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:00.395 20:57:14 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=58084 00:06:00.395 20:57:14 -- event/cpu_locks.sh@99 -- # waitforlisten 58084 /var/tmp/spdk.sock 00:06:00.395 20:57:14 -- common/autotest_common.sh@819 -- # '[' -z 58084 ']' 00:06:00.395 20:57:14 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:00.395 20:57:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.395 20:57:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.395 20:57:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.395 20:57:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.395 20:57:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.395 [2024-07-13 20:57:14.179470] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:00.395 [2024-07-13 20:57:14.179645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58084 ] 00:06:00.653 [2024-07-13 20:57:14.349449] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.653 [2024-07-13 20:57:14.349507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.653 [2024-07-13 20:57:14.516633] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.653 [2024-07-13 20:57:14.516897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.057 20:57:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:02.057 20:57:15 -- common/autotest_common.sh@852 -- # return 0 00:06:02.057 20:57:15 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58113 00:06:02.057 20:57:15 -- event/cpu_locks.sh@103 -- # waitforlisten 58113 /var/tmp/spdk2.sock 00:06:02.057 20:57:15 -- common/autotest_common.sh@819 -- # '[' -z 58113 ']' 00:06:02.057 20:57:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.057 20:57:15 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.057 20:57:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.057 20:57:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.057 20:57:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.057 20:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:02.057 [2024-07-13 20:57:15.905254] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
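waitforlisten, which gates every launch in this suite, is approximately the following poll loop. This is a hypothetical reduction for illustration only — the real helper in autotest_common.sh differs in detail (rpc_get_methods is a standard SPDK RPC; max_retries=100 matches the local variable shown in the traces):

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 1; i <= max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1      # target died before listening
        scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
      done
      return 1
    }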
00:06:02.057 [2024-07-13 20:57:15.905389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58113 ] 00:06:02.315 [2024-07-13 20:57:16.070181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.573 [2024-07-13 20:57:16.369039] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.573 [2024-07-13 20:57:16.369259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.479 20:57:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.479 20:57:18 -- common/autotest_common.sh@852 -- # return 0 00:06:04.479 20:57:18 -- event/cpu_locks.sh@105 -- # locks_exist 58113 00:06:04.479 20:57:18 -- event/cpu_locks.sh@22 -- # lslocks -p 58113 00:06:04.479 20:57:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.414 20:57:18 -- event/cpu_locks.sh@107 -- # killprocess 58084 00:06:05.414 20:57:18 -- common/autotest_common.sh@926 -- # '[' -z 58084 ']' 00:06:05.414 20:57:18 -- common/autotest_common.sh@930 -- # kill -0 58084 00:06:05.414 20:57:18 -- common/autotest_common.sh@931 -- # uname 00:06:05.414 20:57:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.414 20:57:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58084 00:06:05.414 20:57:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:05.414 20:57:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:05.414 killing process with pid 58084 00:06:05.414 20:57:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58084' 00:06:05.414 20:57:19 -- common/autotest_common.sh@945 -- # kill 58084 00:06:05.414 20:57:19 -- common/autotest_common.sh@950 -- # wait 58084 00:06:08.703 20:57:22 -- event/cpu_locks.sh@108 -- # killprocess 58113 00:06:08.703 20:57:22 -- common/autotest_common.sh@926 -- # '[' -z 58113 ']' 00:06:08.703 20:57:22 -- common/autotest_common.sh@930 -- # kill -0 58113 00:06:08.703 20:57:22 -- common/autotest_common.sh@931 -- # uname 00:06:08.703 20:57:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.703 20:57:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58113 00:06:08.703 20:57:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:08.703 killing process with pid 58113 00:06:08.703 20:57:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:08.703 20:57:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58113' 00:06:08.703 20:57:22 -- common/autotest_common.sh@945 -- # kill 58113 00:06:08.703 20:57:22 -- common/autotest_common.sh@950 -- # wait 58113 00:06:10.606 00:06:10.606 real 0m10.154s 00:06:10.606 user 0m11.123s 00:06:10.606 sys 0m1.175s 00:06:10.606 20:57:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.606 20:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:10.606 ************************************ 00:06:10.606 END TEST locking_app_on_unlocked_coremask 00:06:10.606 ************************************ 00:06:10.606 20:57:24 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:10.606 20:57:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:10.606 20:57:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.606 20:57:24 -- common/autotest_common.sh@10 -- # set 
+x 00:06:10.606 ************************************ 00:06:10.606 START TEST locking_app_on_locked_coremask 00:06:10.606 ************************************ 00:06:10.606 20:57:24 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:10.606 20:57:24 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58243 00:06:10.606 20:57:24 -- event/cpu_locks.sh@116 -- # waitforlisten 58243 /var/tmp/spdk.sock 00:06:10.606 20:57:24 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.606 20:57:24 -- common/autotest_common.sh@819 -- # '[' -z 58243 ']' 00:06:10.606 20:57:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.606 20:57:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:10.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.606 20:57:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.606 20:57:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:10.606 20:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:10.606 [2024-07-13 20:57:24.387943] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:10.606 [2024-07-13 20:57:24.388100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58243 ] 00:06:10.864 [2024-07-13 20:57:24.556290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.864 [2024-07-13 20:57:24.707549] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.864 [2024-07-13 20:57:24.707828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.237 20:57:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.237 20:57:25 -- common/autotest_common.sh@852 -- # return 0 00:06:12.237 20:57:25 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58272 00:06:12.237 20:57:25 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58272 /var/tmp/spdk2.sock 00:06:12.237 20:57:25 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.237 20:57:25 -- common/autotest_common.sh@640 -- # local es=0 00:06:12.237 20:57:25 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58272 /var/tmp/spdk2.sock 00:06:12.237 20:57:25 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:12.237 20:57:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:12.237 20:57:25 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:12.237 20:57:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:12.237 20:57:25 -- common/autotest_common.sh@643 -- # waitforlisten 58272 /var/tmp/spdk2.sock 00:06:12.237 20:57:25 -- common/autotest_common.sh@819 -- # '[' -z 58272 ']' 00:06:12.237 20:57:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.237 20:57:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.237 20:57:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
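killprocess, reconstructed from the uname/ps/kill/wait xtrace that recurs above (the sudo special case visible in the trace is stubbed out in this sketch):

    killprocess() {
      local pid=$1
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" for an SPDK target
      # the real helper branches when process_name is sudo; omitted here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true       # reap it; a SIGTERM exit status is expected
    }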
00:06:12.237 20:57:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.237 20:57:25 -- common/autotest_common.sh@10 -- # set +x 00:06:12.237 [2024-07-13 20:57:26.089533] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:12.237 [2024-07-13 20:57:26.089665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58272 ] 00:06:12.496 [2024-07-13 20:57:26.253947] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58243 has claimed it. 00:06:12.496 [2024-07-13 20:57:26.254025] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:13.077 ERROR: process (pid: 58272) is no longer running 00:06:13.077 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58272) - No such process 00:06:13.077 20:57:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.077 20:57:26 -- common/autotest_common.sh@852 -- # return 1 00:06:13.077 20:57:26 -- common/autotest_common.sh@643 -- # es=1 00:06:13.077 20:57:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:13.077 20:57:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:13.077 20:57:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:13.077 20:57:26 -- event/cpu_locks.sh@122 -- # locks_exist 58243 00:06:13.077 20:57:26 -- event/cpu_locks.sh@22 -- # lslocks -p 58243 00:06:13.077 20:57:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.368 20:57:27 -- event/cpu_locks.sh@124 -- # killprocess 58243 00:06:13.368 20:57:27 -- common/autotest_common.sh@926 -- # '[' -z 58243 ']' 00:06:13.368 20:57:27 -- common/autotest_common.sh@930 -- # kill -0 58243 00:06:13.368 20:57:27 -- common/autotest_common.sh@931 -- # uname 00:06:13.368 20:57:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:13.368 20:57:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58243 00:06:13.368 20:57:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:13.368 20:57:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:13.368 killing process with pid 58243 00:06:13.368 20:57:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58243' 00:06:13.368 20:57:27 -- common/autotest_common.sh@945 -- # kill 58243 00:06:13.368 20:57:27 -- common/autotest_common.sh@950 -- # wait 58243 00:06:15.272 00:06:15.272 real 0m4.638s 00:06:15.272 user 0m5.187s 00:06:15.272 sys 0m0.723s 00:06:15.272 20:57:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.272 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.272 ************************************ 00:06:15.272 END TEST locking_app_on_locked_coremask 00:06:15.272 ************************************ 00:06:15.272 20:57:28 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:15.272 20:57:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.272 20:57:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.272 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.272 ************************************ 00:06:15.272 START TEST locking_overlapped_coremask 00:06:15.272 ************************************ 00:06:15.272 20:57:28 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:15.272 20:57:28 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58331 00:06:15.272 20:57:28 -- event/cpu_locks.sh@133 -- # waitforlisten 58331 /var/tmp/spdk.sock 00:06:15.272 20:57:28 -- common/autotest_common.sh@819 -- # '[' -z 58331 ']' 00:06:15.272 20:57:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.272 20:57:28 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:15.272 20:57:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.272 20:57:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.272 20:57:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.272 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.272 [2024-07-13 20:57:29.075521] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:15.272 [2024-07-13 20:57:29.075658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58331 ] 00:06:15.531 [2024-07-13 20:57:29.248993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.531 [2024-07-13 20:57:29.408220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.531 [2024-07-13 20:57:29.408565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.531 [2024-07-13 20:57:29.408916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.531 [2024-07-13 20:57:29.408931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.908 20:57:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.908 20:57:30 -- common/autotest_common.sh@852 -- # return 0 00:06:16.908 20:57:30 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58363 00:06:16.908 20:57:30 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58363 /var/tmp/spdk2.sock 00:06:16.908 20:57:30 -- common/autotest_common.sh@640 -- # local es=0 00:06:16.908 20:57:30 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58363 /var/tmp/spdk2.sock 00:06:16.908 20:57:30 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:16.908 20:57:30 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:16.908 20:57:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.908 20:57:30 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:16.908 20:57:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.908 20:57:30 -- common/autotest_common.sh@643 -- # waitforlisten 58363 /var/tmp/spdk2.sock 00:06:16.908 20:57:30 -- common/autotest_common.sh@819 -- # '[' -z 58363 ']' 00:06:16.908 20:57:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.908 20:57:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.908 20:57:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
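The two core masks in play predict the failure logged next: the first target took -m 0x7 (cores 0-2) and the second takes -m 0x1c (cores 2-4), so exactly one core is claimed twice. The arithmetic:

    printf '0x%x\n' $(( 0x07 & 0x1c ))   # -> 0x4: bit 2 set, i.e. core 2 is shared

Hence the upcoming "Cannot create lock on core 2, probably process 58331 has claimed it."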
00:06:16.908 20:57:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.908 20:57:30 -- common/autotest_common.sh@10 -- # set +x 00:06:16.908 [2024-07-13 20:57:30.790037] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:16.908 [2024-07-13 20:57:30.790169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58363 ] 00:06:17.167 [2024-07-13 20:57:30.962978] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58331 has claimed it. 00:06:17.167 [2024-07-13 20:57:30.963058] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.735 ERROR: process (pid: 58363) is no longer running 00:06:17.735 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58363) - No such process 00:06:17.735 20:57:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.735 20:57:31 -- common/autotest_common.sh@852 -- # return 1 00:06:17.735 20:57:31 -- common/autotest_common.sh@643 -- # es=1 00:06:17.735 20:57:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.735 20:57:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:17.735 20:57:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.735 20:57:31 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:17.735 20:57:31 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.735 20:57:31 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.735 20:57:31 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.735 20:57:31 -- event/cpu_locks.sh@141 -- # killprocess 58331 00:06:17.735 20:57:31 -- common/autotest_common.sh@926 -- # '[' -z 58331 ']' 00:06:17.735 20:57:31 -- common/autotest_common.sh@930 -- # kill -0 58331 00:06:17.735 20:57:31 -- common/autotest_common.sh@931 -- # uname 00:06:17.735 20:57:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:17.735 20:57:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58331 00:06:17.735 20:57:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:17.735 killing process with pid 58331 00:06:17.735 20:57:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:17.735 20:57:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58331' 00:06:17.735 20:57:31 -- common/autotest_common.sh@945 -- # kill 58331 00:06:17.735 20:57:31 -- common/autotest_common.sh@950 -- # wait 58331 00:06:19.641 00:06:19.641 real 0m4.419s 00:06:19.641 user 0m12.095s 00:06:19.641 sys 0m0.535s 00:06:19.641 20:57:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.641 20:57:33 -- common/autotest_common.sh@10 -- # set +x 00:06:19.641 ************************************ 00:06:19.641 END TEST locking_overlapped_coremask 00:06:19.641 ************************************ 00:06:19.641 20:57:33 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:19.641 20:57:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:19.641 20:57:33 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.641 20:57:33 -- common/autotest_common.sh@10 -- # set +x 00:06:19.641 ************************************ 00:06:19.641 START TEST locking_overlapped_coremask_via_rpc 00:06:19.641 ************************************ 00:06:19.641 20:57:33 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:19.641 20:57:33 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58424 00:06:19.641 20:57:33 -- event/cpu_locks.sh@149 -- # waitforlisten 58424 /var/tmp/spdk.sock 00:06:19.641 20:57:33 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:19.641 20:57:33 -- common/autotest_common.sh@819 -- # '[' -z 58424 ']' 00:06:19.641 20:57:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.641 20:57:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.641 20:57:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.641 20:57:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.641 20:57:33 -- common/autotest_common.sh@10 -- # set +x 00:06:19.641 [2024-07-13 20:57:33.531691] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:19.641 [2024-07-13 20:57:33.531826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58424 ] 00:06:19.899 [2024-07-13 20:57:33.689861] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:19.899 [2024-07-13 20:57:33.689930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.159 [2024-07-13 20:57:33.861353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.159 [2024-07-13 20:57:33.861712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.159 [2024-07-13 20:57:33.862011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.159 [2024-07-13 20:57:33.862193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.536 20:57:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.536 20:57:35 -- common/autotest_common.sh@852 -- # return 0 00:06:21.536 20:57:35 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58450 00:06:21.536 20:57:35 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:21.536 20:57:35 -- event/cpu_locks.sh@153 -- # waitforlisten 58450 /var/tmp/spdk2.sock 00:06:21.536 20:57:35 -- common/autotest_common.sh@819 -- # '[' -z 58450 ']' 00:06:21.536 20:57:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.536 20:57:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.536 20:57:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
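check_remaining_locks, which runs after each overlapped scenario (cpu_locks.sh@36-38 in the traces above and below), mirrors its xtrace almost line for line: glob the lock files actually present and require them to be exactly the ones for cores 000-002.

    check_remaining_locks() {
      locks=(/var/tmp/spdk_cpu_lock_*)
      locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
      [[ ${locks[*]} == "${locks_expected[*]}" ]]   # the survivor's locks, nothing more
    }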
00:06:21.536 20:57:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.536 20:57:35 -- common/autotest_common.sh@10 -- # set +x 00:06:21.536 [2024-07-13 20:57:35.256943] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:21.536 [2024-07-13 20:57:35.257081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58450 ] 00:06:21.536 [2024-07-13 20:57:35.429406] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.536 [2024-07-13 20:57:35.429472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.104 [2024-07-13 20:57:35.789627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.104 [2024-07-13 20:57:35.789966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.104 [2024-07-13 20:57:35.794044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.104 [2024-07-13 20:57:35.794064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.009 20:57:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.009 20:57:37 -- common/autotest_common.sh@852 -- # return 0 00:06:24.009 20:57:37 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.009 20:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.009 20:57:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.009 20:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.009 20:57:37 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.009 20:57:37 -- common/autotest_common.sh@640 -- # local es=0 00:06:24.009 20:57:37 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.009 20:57:37 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:24.009 20:57:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.009 20:57:37 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:24.009 20:57:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.009 20:57:37 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.009 20:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.009 20:57:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.009 [2024-07-13 20:57:37.635179] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58424 has claimed it. 00:06:24.009 request: 00:06:24.009 { 00:06:24.009 "method": "framework_enable_cpumask_locks", 00:06:24.009 "req_id": 1 00:06:24.009 } 00:06:24.009 Got JSON-RPC error response 00:06:24.009 response: 00:06:24.009 { 00:06:24.009 "code": -32603, 00:06:24.009 "message": "Failed to claim CPU core: 2" 00:06:24.009 } 00:06:24.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
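The request/response pair above is reproducible against the second target directly (socket path from the log): with pid 58424 still holding core 2, enabling locks on the 0x1c-mask instance must fail.

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: "Failed to claim CPU core: 2"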
00:06:24.009 20:57:37 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:24.009 20:57:37 -- common/autotest_common.sh@643 -- # es=1 00:06:24.009 20:57:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:24.009 20:57:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:24.009 20:57:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:24.009 20:57:37 -- event/cpu_locks.sh@158 -- # waitforlisten 58424 /var/tmp/spdk.sock 00:06:24.009 20:57:37 -- common/autotest_common.sh@819 -- # '[' -z 58424 ']' 00:06:24.009 20:57:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.009 20:57:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.009 20:57:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.009 20:57:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.009 20:57:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.009 20:57:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.009 20:57:37 -- common/autotest_common.sh@852 -- # return 0 00:06:24.009 20:57:37 -- event/cpu_locks.sh@159 -- # waitforlisten 58450 /var/tmp/spdk2.sock 00:06:24.009 20:57:37 -- common/autotest_common.sh@819 -- # '[' -z 58450 ']' 00:06:24.009 20:57:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.009 20:57:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.009 20:57:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.009 20:57:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.009 20:57:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.268 20:57:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.268 20:57:38 -- common/autotest_common.sh@852 -- # return 0 00:06:24.268 20:57:38 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.268 20:57:38 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.268 20:57:38 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.268 20:57:38 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.268 ************************************ 00:06:24.268 END TEST locking_overlapped_coremask_via_rpc 00:06:24.268 ************************************ 00:06:24.268 00:06:24.268 real 0m4.739s 00:06:24.268 user 0m1.903s 00:06:24.268 sys 0m0.249s 00:06:24.268 20:57:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.268 20:57:38 -- common/autotest_common.sh@10 -- # set +x 00:06:24.527 20:57:38 -- event/cpu_locks.sh@174 -- # cleanup 00:06:24.527 20:57:38 -- event/cpu_locks.sh@15 -- # [[ -z 58424 ]] 00:06:24.527 20:57:38 -- event/cpu_locks.sh@15 -- # killprocess 58424 00:06:24.527 20:57:38 -- common/autotest_common.sh@926 -- # '[' -z 58424 ']' 00:06:24.527 20:57:38 -- common/autotest_common.sh@930 -- # kill -0 58424 00:06:24.527 20:57:38 -- common/autotest_common.sh@931 -- # uname 00:06:24.527 20:57:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.527 20:57:38 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 58424 00:06:24.527 killing process with pid 58424 00:06:24.527 20:57:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.527 20:57:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.527 20:57:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58424' 00:06:24.527 20:57:38 -- common/autotest_common.sh@945 -- # kill 58424 00:06:24.527 20:57:38 -- common/autotest_common.sh@950 -- # wait 58424 00:06:26.432 20:57:40 -- event/cpu_locks.sh@16 -- # [[ -z 58450 ]] 00:06:26.432 20:57:40 -- event/cpu_locks.sh@16 -- # killprocess 58450 00:06:26.432 20:57:40 -- common/autotest_common.sh@926 -- # '[' -z 58450 ']' 00:06:26.432 20:57:40 -- common/autotest_common.sh@930 -- # kill -0 58450 00:06:26.432 20:57:40 -- common/autotest_common.sh@931 -- # uname 00:06:26.432 20:57:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.432 20:57:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58450 00:06:26.432 killing process with pid 58450 00:06:26.432 20:57:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:26.432 20:57:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:26.432 20:57:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58450' 00:06:26.432 20:57:40 -- common/autotest_common.sh@945 -- # kill 58450 00:06:26.432 20:57:40 -- common/autotest_common.sh@950 -- # wait 58450 00:06:28.336 20:57:42 -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.336 20:57:42 -- event/cpu_locks.sh@1 -- # cleanup 00:06:28.336 20:57:42 -- event/cpu_locks.sh@15 -- # [[ -z 58424 ]] 00:06:28.336 20:57:42 -- event/cpu_locks.sh@15 -- # killprocess 58424 00:06:28.336 20:57:42 -- common/autotest_common.sh@926 -- # '[' -z 58424 ']' 00:06:28.336 20:57:42 -- common/autotest_common.sh@930 -- # kill -0 58424 00:06:28.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58424) - No such process 00:06:28.336 Process with pid 58424 is not found 00:06:28.336 20:57:42 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58424 is not found' 00:06:28.336 20:57:42 -- event/cpu_locks.sh@16 -- # [[ -z 58450 ]] 00:06:28.336 20:57:42 -- event/cpu_locks.sh@16 -- # killprocess 58450 00:06:28.336 20:57:42 -- common/autotest_common.sh@926 -- # '[' -z 58450 ']' 00:06:28.336 20:57:42 -- common/autotest_common.sh@930 -- # kill -0 58450 00:06:28.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58450) - No such process 00:06:28.336 Process with pid 58450 is not found 00:06:28.336 20:57:42 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58450 is not found' 00:06:28.336 20:57:42 -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.336 00:06:28.336 real 0m45.452s 00:06:28.336 user 1m21.062s 00:06:28.336 sys 0m5.857s 00:06:28.336 20:57:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.336 ************************************ 00:06:28.336 20:57:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.336 END TEST cpu_locks 00:06:28.336 ************************************ 00:06:28.336 00:06:28.336 real 1m15.505s 00:06:28.336 user 2m19.553s 00:06:28.336 sys 0m9.248s 00:06:28.336 20:57:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.336 ************************************ 00:06:28.336 END TEST event 00:06:28.336 20:57:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.336 ************************************ 00:06:28.336 20:57:42 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:28.336 20:57:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.336 20:57:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.336 20:57:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.336 ************************************ 00:06:28.336 START TEST thread 00:06:28.336 ************************************ 00:06:28.336 20:57:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:28.336 * Looking for test storage... 00:06:28.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:28.336 20:57:42 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.336 20:57:42 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:28.336 20:57:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.336 20:57:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.336 ************************************ 00:06:28.336 START TEST thread_poller_perf 00:06:28.336 ************************************ 00:06:28.336 20:57:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.594 [2024-07-13 20:57:42.285669] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:28.594 [2024-07-13 20:57:42.285850] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58627 ] 00:06:28.594 [2024-07-13 20:57:42.456678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.852 [2024-07-13 20:57:42.669141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.852 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:30.284 ====================================== 00:06:30.284 busy:2214650814 (cyc) 00:06:30.284 total_run_count: 333000 00:06:30.284 tsc_hz: 2200000000 (cyc) 00:06:30.284 ====================================== 00:06:30.284 poller_cost: 6650 (cyc), 3022 (nsec) 00:06:30.284 00:06:30.284 real 0m1.771s 00:06:30.284 user 0m1.567s 00:06:30.284 sys 0m0.093s 00:06:30.284 20:57:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.284 20:57:44 -- common/autotest_common.sh@10 -- # set +x 00:06:30.284 ************************************ 00:06:30.284 END TEST thread_poller_perf 00:06:30.284 ************************************ 00:06:30.284 20:57:44 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:30.284 20:57:44 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:30.284 20:57:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.284 20:57:44 -- common/autotest_common.sh@10 -- # set +x 00:06:30.284 ************************************ 00:06:30.284 START TEST thread_poller_perf 00:06:30.284 ************************************ 00:06:30.284 20:57:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:30.284 [2024-07-13 20:57:44.112129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
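The poller_cost figures in the summary above follow from the other reported numbers; a quick sanity check of the implied relationship (inferred from the output format, not stated by poller_perf itself):

# cycles per poller run = busy cycles / total_run_count
echo $(( 2214650814 / 333000 ))               # -> 6650 cyc, as reported
# nanoseconds per run = cycles * 1e9 / tsc_hz (2.2 GHz here)
echo $(( 6650 * 1000000000 / 2200000000 ))    # -> 3022 nsec, as reported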
00:06:30.284 [2024-07-13 20:57:44.112305] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58669 ] 00:06:30.542 [2024-07-13 20:57:44.281717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.542 [2024-07-13 20:57:44.433641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.542 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:31.916 ====================================== 00:06:31.916 busy:2205104920 (cyc) 00:06:31.916 total_run_count: 4340000 00:06:31.917 tsc_hz: 2200000000 (cyc) 00:06:31.917 ====================================== 00:06:31.917 poller_cost: 508 (cyc), 230 (nsec) 00:06:31.917 00:06:31.917 real 0m1.686s 00:06:31.917 user 0m1.472s 00:06:31.917 sys 0m0.105s 00:06:31.917 20:57:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.917 20:57:45 -- common/autotest_common.sh@10 -- # set +x 00:06:31.917 ************************************ 00:06:31.917 END TEST thread_poller_perf 00:06:31.917 ************************************ 00:06:31.917 20:57:45 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:31.917 00:06:31.917 real 0m3.652s 00:06:31.917 user 0m3.117s 00:06:31.917 sys 0m0.303s 00:06:31.917 20:57:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.917 20:57:45 -- common/autotest_common.sh@10 -- # set +x 00:06:31.917 ************************************ 00:06:31.917 END TEST thread 00:06:31.917 ************************************ 00:06:32.175 20:57:45 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:32.175 20:57:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.175 20:57:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.175 20:57:45 -- common/autotest_common.sh@10 -- # set +x 00:06:32.175 /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat: trap: line 2: unexpected EOF while looking for matching `)' 00:06:32.175 ************************************ 00:06:32.175 START TEST accel 00:06:32.175 ************************************ 00:06:32.175 20:57:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:32.175 * Looking for test storage... 00:06:32.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:32.175 20:57:45 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:32.175 20:57:45 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:32.175 20:57:45 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.175 20:57:45 -- accel/accel.sh@59 -- # spdk_tgt_pid=58749 00:06:32.175 20:57:45 -- accel/accel.sh@60 -- # waitforlisten 58749 00:06:32.175 20:57:45 -- common/autotest_common.sh@819 -- # '[' -z 58749 ']' 00:06:32.175 20:57:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.175 20:57:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.176 20:57:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:32.176 20:57:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.176 20:57:45 -- common/autotest_common.sh@10 -- # set +x 00:06:32.176 20:57:45 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:32.176 20:57:45 -- accel/accel.sh@58 -- # build_accel_config 00:06:32.176 20:57:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.176 20:57:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.176 20:57:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.176 20:57:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.176 20:57:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.176 20:57:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.176 20:57:45 -- accel/accel.sh@42 -- # jq -r . 00:06:32.176 [2024-07-13 20:57:46.053637] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:32.176 [2024-07-13 20:57:46.053854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58749 ] 00:06:32.434 [2024-07-13 20:57:46.228012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.692 [2024-07-13 20:57:46.447969] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.692 [2024-07-13 20:57:46.448232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.070 20:57:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.070 20:57:47 -- common/autotest_common.sh@852 -- # return 0 00:06:34.070 20:57:47 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:34.070 20:57:47 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:34.070 20:57:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:34.070 20:57:47 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:34.070 20:57:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.070 20:57:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # IFS== 00:06:34.070 20:57:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:34.070 20:57:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:34.070 20:57:47 -- accel/accel.sh@67 -- # killprocess 58749 00:06:34.070 20:57:47 -- common/autotest_common.sh@926 -- # '[' -z 58749 ']' 00:06:34.070 20:57:47 -- common/autotest_common.sh@930 -- # kill -0 58749 00:06:34.070 20:57:47 -- common/autotest_common.sh@931 -- # uname 00:06:34.070 20:57:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.070 20:57:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58749 00:06:34.070 killing process with pid 58749 00:06:34.070 20:57:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.070 20:57:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.070 20:57:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58749' 00:06:34.070 20:57:47 -- common/autotest_common.sh@945 -- # kill 58749 00:06:34.070 20:57:47 -- common/autotest_common.sh@950 -- # wait 58749 00:06:35.997 20:57:49 -- accel/accel.sh@68 -- # trap - ERR 00:06:35.997 20:57:49 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:35.997 20:57:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:35.997 20:57:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.997 20:57:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.997 20:57:49 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:35.997 20:57:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:35.997 20:57:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.997 20:57:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.997 20:57:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.997 20:57:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.997 20:57:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.997 20:57:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.997 20:57:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.997 20:57:49 -- accel/accel.sh@42 -- # jq -r . 
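The long run of expected_opcs assignments traced above is driven by the jq transform captured alongside it: accel_get_opc_assignments returns a JSON map of opcode to module, and jq flattens it into opc=module pairs for the IFS== read loop. A hypothetical standalone illustration with fabricated input:

# Two made-up opcodes, just to show the shape of the transform.
echo '{"copy":"software","fill":"software"}' \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# copy=software
# fill=software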
00:06:35.997 20:57:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.997 20:57:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.997 20:57:49 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:35.997 20:57:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:35.997 20:57:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.997 20:57:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.997 ************************************ 00:06:35.997 START TEST accel_missing_filename 00:06:35.997 ************************************ 00:06:35.997 20:57:49 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:35.997 20:57:49 -- common/autotest_common.sh@640 -- # local es=0 00:06:35.997 20:57:49 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:35.997 20:57:49 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:35.997 20:57:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.997 20:57:49 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:35.997 20:57:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.997 20:57:49 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:35.997 20:57:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:35.997 20:57:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.997 20:57:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.997 20:57:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.997 20:57:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.997 20:57:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.997 20:57:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.997 20:57:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.997 20:57:49 -- accel/accel.sh@42 -- # jq -r . 00:06:35.997 [2024-07-13 20:57:49.848633] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:35.997 [2024-07-13 20:57:49.848934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58821 ] 00:06:36.256 [2024-07-13 20:57:50.030529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.515 [2024-07-13 20:57:50.192809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.515 [2024-07-13 20:57:50.350810] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.082 [2024-07-13 20:57:50.722806] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:37.341 A filename is required. 
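"A filename is required." is the expected negative result here: -w compress without a -l input file must abort. For contrast, a hypothetical well-formed compress invocation (the bib input file is the one the compress_verify test below actually uses):

# Same binary, same workload, but with the -l input the help text calls for.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib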
00:06:37.341 20:57:51 -- common/autotest_common.sh@643 -- # es=234 00:06:37.341 20:57:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.341 20:57:51 -- common/autotest_common.sh@652 -- # es=106 00:06:37.341 20:57:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:37.341 20:57:51 -- common/autotest_common.sh@660 -- # es=1 00:06:37.341 20:57:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.341 00:06:37.341 real 0m1.255s 00:06:37.341 user 0m1.021s 00:06:37.341 sys 0m0.175s 00:06:37.341 ************************************ 00:06:37.341 END TEST accel_missing_filename 00:06:37.341 ************************************ 00:06:37.341 20:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.341 20:57:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.341 20:57:51 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.341 20:57:51 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:37.341 20:57:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.341 20:57:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.341 ************************************ 00:06:37.341 START TEST accel_compress_verify 00:06:37.341 ************************************ 00:06:37.341 20:57:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.341 20:57:51 -- common/autotest_common.sh@640 -- # local es=0 00:06:37.341 20:57:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.341 20:57:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:37.341 20:57:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.341 20:57:51 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:37.341 20:57:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.341 20:57:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.341 20:57:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.341 20:57:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.341 20:57:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.341 20:57:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.341 20:57:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.341 20:57:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.341 20:57:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.341 20:57:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.341 20:57:51 -- accel/accel.sh@42 -- # jq -r . 00:06:37.341 [2024-07-13 20:57:51.135491] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
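The es arithmetic traced at the top of this chunk (234, then 106, then 1) is the tail end of the harness's NOT wrapper. A rough reconstruction from the visible xtrace, not copied from autotest_common.sh:

# NOT succeeds only when the wrapped command fails: run it (after the
# valid_exec_arg / "type -t" gate seen in the trace), strip the +128 offset
# the shell adds when a process dies to a signal (234 - 128 = 106), collapse
# any remaining nonzero status to 1, and invert the result.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))
    case "$es" in
        0) ;;
        *) es=1 ;;
    esac
    (( !es == 0 ))   # exit 0 (success) iff the command failed
}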
00:06:37.341 [2024-07-13 20:57:51.135666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58858 ] 00:06:37.600 [2024-07-13 20:57:51.308450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.600 [2024-07-13 20:57:51.461048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.859 [2024-07-13 20:57:51.605452] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.117 [2024-07-13 20:57:51.979141] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:38.377 00:06:38.377 Compression does not support the verify option, aborting. 00:06:38.377 ************************************ 00:06:38.377 END TEST accel_compress_verify 00:06:38.377 ************************************ 00:06:38.377 20:57:52 -- common/autotest_common.sh@643 -- # es=161 00:06:38.377 20:57:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.377 20:57:52 -- common/autotest_common.sh@652 -- # es=33 00:06:38.377 20:57:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:38.377 20:57:52 -- common/autotest_common.sh@660 -- # es=1 00:06:38.377 20:57:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.377 00:06:38.377 real 0m1.208s 00:06:38.377 user 0m1.009s 00:06:38.377 sys 0m0.142s 00:06:38.377 20:57:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.377 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.636 20:57:52 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:38.636 20:57:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:38.636 20:57:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.636 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.636 ************************************ 00:06:38.636 START TEST accel_wrong_workload 00:06:38.636 ************************************ 00:06:38.636 20:57:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:38.636 20:57:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.636 20:57:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:38.636 20:57:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:38.636 20:57:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.636 20:57:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:38.636 20:57:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.636 20:57:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:38.636 20:57:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:38.636 20:57:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.636 20:57:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.636 20:57:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.636 20:57:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.636 20:57:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.636 20:57:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.636 20:57:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.636 20:57:52 -- accel/accel.sh@42 -- # jq -r . 
00:06:38.636 Unsupported workload type: foobar 00:06:38.636 [2024-07-13 20:57:52.381323] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:38.636 accel_perf options: 00:06:38.637 [-h help message] 00:06:38.637 [-q queue depth per core] 00:06:38.637 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.637 [-T number of threads per core 00:06:38.637 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:38.637 [-t time in seconds] 00:06:38.637 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.637 [ dif_verify, , dif_generate, dif_generate_copy 00:06:38.637 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.637 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.637 [-S for crc32c workload, use this seed value (default 0) 00:06:38.637 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.637 [-f for fill workload, use this BYTE value (default 255) 00:06:38.637 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.637 [-y verify result if this switch is on] 00:06:38.637 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.637 Can be used to spread operations across a wider range of memory. 00:06:38.637 20:57:52 -- common/autotest_common.sh@643 -- # es=1 00:06:38.637 20:57:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.637 20:57:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.637 20:57:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.637 00:06:38.637 real 0m0.069s 00:06:38.637 user 0m0.084s 00:06:38.637 sys 0m0.034s 00:06:38.637 20:57:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.637 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.637 ************************************ 00:06:38.637 END TEST accel_wrong_workload 00:06:38.637 ************************************ 00:06:38.637 20:57:52 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.637 20:57:52 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:38.637 20:57:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.637 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.637 ************************************ 00:06:38.637 START TEST accel_negative_buffers 00:06:38.637 ************************************ 00:06:38.637 20:57:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.637 20:57:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.637 20:57:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:38.637 20:57:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:38.637 20:57:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.637 20:57:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:38.637 20:57:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.637 20:57:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:38.637 20:57:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:38.637 20:57:52 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:38.637 20:57:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.637 20:57:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.637 20:57:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.637 20:57:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.637 20:57:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.637 20:57:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.637 20:57:52 -- accel/accel.sh@42 -- # jq -r . 00:06:38.637 -x option must be non-negative. 00:06:38.637 [2024-07-13 20:57:52.502568] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:38.637 accel_perf options: 00:06:38.637 [-h help message] 00:06:38.637 [-q queue depth per core] 00:06:38.637 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.637 [-T number of threads per core 00:06:38.637 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:38.637 [-t time in seconds] 00:06:38.637 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.637 [ dif_verify, , dif_generate, dif_generate_copy 00:06:38.637 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.637 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.637 [-S for crc32c workload, use this seed value (default 0) 00:06:38.637 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.637 [-f for fill workload, use this BYTE value (default 255) 00:06:38.637 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.637 [-y verify result if this switch is on] 00:06:38.637 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.637 Can be used to spread operations across a wider range of memory. 
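The option summary printed above (twice, once per failed invocation) is enough to assemble a valid run; for example, a hypothetical invocation using only documented flags, matching the crc32c test that follows:

# -t run time, -w workload, -S crc32c seed, -y verify, -q queue depth,
# -o transfer size -- all taken from the help text above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w crc32c -S 32 -y -q 32 -o 4096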
00:06:38.637 20:57:52 -- common/autotest_common.sh@643 -- # es=1 00:06:38.637 20:57:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.637 20:57:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.637 20:57:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.637 00:06:38.637 real 0m0.073s 00:06:38.637 user 0m0.089s 00:06:38.637 sys 0m0.034s 00:06:38.637 20:57:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.637 ************************************ 00:06:38.637 END TEST accel_negative_buffers 00:06:38.637 ************************************ 00:06:38.637 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.896 20:57:52 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:38.896 20:57:52 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:38.896 20:57:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.896 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.896 ************************************ 00:06:38.896 START TEST accel_crc32c 00:06:38.896 ************************************ 00:06:38.896 20:57:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:38.896 20:57:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.896 20:57:52 -- accel/accel.sh@17 -- # local accel_module 00:06:38.896 20:57:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:38.896 20:57:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:38.896 20:57:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.896 20:57:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.896 20:57:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.896 20:57:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.896 20:57:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.896 20:57:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.896 20:57:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.896 20:57:52 -- accel/accel.sh@42 -- # jq -r . 00:06:38.896 [2024-07-13 20:57:52.619175] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:38.896 [2024-07-13 20:57:52.619317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58930 ] 00:06:38.896 [2024-07-13 20:57:52.767126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.155 [2024-07-13 20:57:52.916385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.058 20:57:54 -- accel/accel.sh@18 -- # out=' 00:06:41.058 SPDK Configuration: 00:06:41.058 Core mask: 0x1 00:06:41.058 00:06:41.058 Accel Perf Configuration: 00:06:41.058 Workload Type: crc32c 00:06:41.058 CRC-32C seed: 32 00:06:41.058 Transfer size: 4096 bytes 00:06:41.058 Vector count 1 00:06:41.058 Module: software 00:06:41.058 Queue depth: 32 00:06:41.058 Allocate depth: 32 00:06:41.058 # threads/core: 1 00:06:41.058 Run time: 1 seconds 00:06:41.058 Verify: Yes 00:06:41.058 00:06:41.058 Running for 1 seconds... 
00:06:41.058 00:06:41.058 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.058 ------------------------------------------------------------------------------------ 00:06:41.058 0,0 469184/s 1832 MiB/s 0 0 00:06:41.058 ==================================================================================== 00:06:41.058 Total 469184/s 1832 MiB/s 0 0' 00:06:41.058 20:57:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.058 20:57:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.058 20:57:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:41.058 20:57:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:41.058 20:57:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.058 20:57:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.058 20:57:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.058 20:57:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.058 20:57:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.058 20:57:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.058 20:57:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.058 20:57:54 -- accel/accel.sh@42 -- # jq -r . 00:06:41.058 [2024-07-13 20:57:54.817976] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:41.058 [2024-07-13 20:57:54.818148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58956 ] 00:06:41.317 [2024-07-13 20:57:54.984583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.317 [2024-07-13 20:57:55.143473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val= 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val= 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val=0x1 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val= 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val= 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val=crc32c 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val=32 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val= 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val=software 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val=32 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val=32 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val=1 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val=Yes 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val= 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.578 20:57:55 -- accel/accel.sh@21 -- # val= 00:06:41.578 20:57:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.578 20:57:55 -- accel/accel.sh@20 -- # read -r var val 00:06:43.482 20:57:56 -- accel/accel.sh@21 -- # val= 00:06:43.482 20:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.482 20:57:56 -- accel/accel.sh@21 -- # val= 00:06:43.482 20:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.482 20:57:56 -- accel/accel.sh@21 -- # val= 00:06:43.482 20:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.482 20:57:56 -- accel/accel.sh@21 -- # val= 00:06:43.482 20:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.482 20:57:56 -- accel/accel.sh@21 -- # val= 00:06:43.482 20:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.482 20:57:56 -- 
accel/accel.sh@20 -- # read -r var val 00:06:43.482 20:57:56 -- accel/accel.sh@21 -- # val= 00:06:43.482 20:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.482 20:57:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.482 20:57:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.482 20:57:56 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:43.482 20:57:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.482 00:06:43.482 real 0m4.400s 00:06:43.482 user 0m3.919s 00:06:43.482 sys 0m0.272s 00:06:43.482 ************************************ 00:06:43.482 END TEST accel_crc32c 00:06:43.482 ************************************ 00:06:43.482 20:57:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.482 20:57:56 -- common/autotest_common.sh@10 -- # set +x 00:06:43.482 20:57:57 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:43.482 20:57:57 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:43.482 20:57:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.482 20:57:57 -- common/autotest_common.sh@10 -- # set +x 00:06:43.482 ************************************ 00:06:43.482 START TEST accel_crc32c_C2 00:06:43.482 ************************************ 00:06:43.482 20:57:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:43.482 20:57:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.482 20:57:57 -- accel/accel.sh@17 -- # local accel_module 00:06:43.482 20:57:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:43.482 20:57:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:43.482 20:57:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.482 20:57:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.482 20:57:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.482 20:57:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.482 20:57:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.482 20:57:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.482 20:57:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.482 20:57:57 -- accel/accel.sh@42 -- # jq -r . 00:06:43.482 [2024-07-13 20:57:57.074546] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:43.482 [2024-07-13 20:57:57.074683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58997 ] 00:06:43.482 [2024-07-13 20:57:57.226920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.482 [2024-07-13 20:57:57.377388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.411 20:57:59 -- accel/accel.sh@18 -- # out=' 00:06:45.411 SPDK Configuration: 00:06:45.411 Core mask: 0x1 00:06:45.411 00:06:45.411 Accel Perf Configuration: 00:06:45.411 Workload Type: crc32c 00:06:45.411 CRC-32C seed: 0 00:06:45.411 Transfer size: 4096 bytes 00:06:45.411 Vector count 2 00:06:45.411 Module: software 00:06:45.411 Queue depth: 32 00:06:45.411 Allocate depth: 32 00:06:45.411 # threads/core: 1 00:06:45.411 Run time: 1 seconds 00:06:45.411 Verify: Yes 00:06:45.411 00:06:45.411 Running for 1 seconds... 
00:06:45.411 00:06:45.411 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.411 ------------------------------------------------------------------------------------ 00:06:45.411 0,0 366432/s 2862 MiB/s 0 0 00:06:45.411 ==================================================================================== 00:06:45.411 Total 366432/s 1431 MiB/s 0 0' 00:06:45.411 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.411 20:57:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:45.411 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.411 20:57:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:45.411 20:57:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.411 20:57:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.411 20:57:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.411 20:57:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.411 20:57:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.411 20:57:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.411 20:57:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.411 20:57:59 -- accel/accel.sh@42 -- # jq -r . 00:06:45.411 [2024-07-13 20:57:59.274754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:45.411 [2024-07-13 20:57:59.274958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59029 ] 00:06:45.669 [2024-07-13 20:57:59.445730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.928 [2024-07-13 20:57:59.596150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.928 20:57:59 -- accel/accel.sh@21 -- # val= 00:06:45.928 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.928 20:57:59 -- accel/accel.sh@21 -- # val= 00:06:45.928 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.928 20:57:59 -- accel/accel.sh@21 -- # val=0x1 00:06:45.928 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.928 20:57:59 -- accel/accel.sh@21 -- # val= 00:06:45.928 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.928 20:57:59 -- accel/accel.sh@21 -- # val= 00:06:45.928 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.928 20:57:59 -- accel/accel.sh@21 -- # val=crc32c 00:06:45.928 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.928 20:57:59 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.928 20:57:59 -- accel/accel.sh@21 -- # val=0 00:06:45.928 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.928 20:57:59 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.928 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.928 20:57:59 -- accel/accel.sh@21 -- # val= 00:06:45.928 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.928 20:57:59 -- accel/accel.sh@21 -- # val=software 00:06:45.928 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.928 20:57:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.928 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.929 20:57:59 -- accel/accel.sh@21 -- # val=32 00:06:45.929 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.929 20:57:59 -- accel/accel.sh@21 -- # val=32 00:06:45.929 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.929 20:57:59 -- accel/accel.sh@21 -- # val=1 00:06:45.929 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.929 20:57:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.929 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.929 20:57:59 -- accel/accel.sh@21 -- # val=Yes 00:06:45.929 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.929 20:57:59 -- accel/accel.sh@21 -- # val= 00:06:45.929 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.929 20:57:59 -- accel/accel.sh@21 -- # val= 00:06:45.929 20:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.929 20:57:59 -- accel/accel.sh@20 -- # read -r var val 00:06:47.831 20:58:01 -- accel/accel.sh@21 -- # val= 00:06:47.831 20:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # IFS=: 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # read -r var val 00:06:47.831 20:58:01 -- accel/accel.sh@21 -- # val= 00:06:47.831 20:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # IFS=: 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # read -r var val 00:06:47.831 20:58:01 -- accel/accel.sh@21 -- # val= 00:06:47.831 20:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # IFS=: 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # read -r var val 00:06:47.831 20:58:01 -- accel/accel.sh@21 -- # val= 00:06:47.831 20:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # IFS=: 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # read -r var val 00:06:47.831 20:58:01 -- accel/accel.sh@21 -- # val= 00:06:47.831 20:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # IFS=: 00:06:47.831 20:58:01 -- 
accel/accel.sh@20 -- # read -r var val 00:06:47.831 20:58:01 -- accel/accel.sh@21 -- # val= 00:06:47.831 20:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # IFS=: 00:06:47.831 20:58:01 -- accel/accel.sh@20 -- # read -r var val 00:06:47.831 20:58:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.831 20:58:01 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:47.831 ************************************ 00:06:47.831 END TEST accel_crc32c_C2 00:06:47.831 ************************************ 00:06:47.831 20:58:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.831 00:06:47.831 real 0m4.415s 00:06:47.831 user 0m3.927s 00:06:47.831 sys 0m0.282s 00:06:47.831 20:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.831 20:58:01 -- common/autotest_common.sh@10 -- # set +x 00:06:47.831 20:58:01 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:47.831 20:58:01 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:47.831 20:58:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.831 20:58:01 -- common/autotest_common.sh@10 -- # set +x 00:06:47.831 ************************************ 00:06:47.831 START TEST accel_copy 00:06:47.831 ************************************ 00:06:47.831 20:58:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:47.831 20:58:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.831 20:58:01 -- accel/accel.sh@17 -- # local accel_module 00:06:47.831 20:58:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:47.831 20:58:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:47.831 20:58:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.831 20:58:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.831 20:58:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.831 20:58:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.831 20:58:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.831 20:58:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.831 20:58:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.831 20:58:01 -- accel/accel.sh@42 -- # jq -r . 00:06:47.831 [2024-07-13 20:58:01.560051] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:47.831 [2024-07-13 20:58:01.560224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59070 ] 00:06:47.831 [2024-07-13 20:58:01.729908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.091 [2024-07-13 20:58:01.885594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.004 20:58:03 -- accel/accel.sh@18 -- # out=' 00:06:50.004 SPDK Configuration: 00:06:50.004 Core mask: 0x1 00:06:50.004 00:06:50.004 Accel Perf Configuration: 00:06:50.004 Workload Type: copy 00:06:50.004 Transfer size: 4096 bytes 00:06:50.004 Vector count 1 00:06:50.004 Module: software 00:06:50.004 Queue depth: 32 00:06:50.004 Allocate depth: 32 00:06:50.004 # threads/core: 1 00:06:50.004 Run time: 1 seconds 00:06:50.004 Verify: Yes 00:06:50.004 00:06:50.004 Running for 1 seconds... 
00:06:50.004 00:06:50.004 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.004 ------------------------------------------------------------------------------------ 00:06:50.004 0,0 279712/s 1092 MiB/s 0 0 00:06:50.004 ==================================================================================== 00:06:50.004 Total 279712/s 1092 MiB/s 0 0' 00:06:50.004 20:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.004 20:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.004 20:58:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:50.004 20:58:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:50.004 20:58:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.004 20:58:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.004 20:58:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.004 20:58:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.004 20:58:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.004 20:58:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.004 20:58:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.004 20:58:03 -- accel/accel.sh@42 -- # jq -r . 00:06:50.004 [2024-07-13 20:58:03.800954] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:50.004 [2024-07-13 20:58:03.801127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59096 ] 00:06:50.263 [2024-07-13 20:58:03.971237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.263 [2024-07-13 20:58:04.119060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val= 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val= 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val=0x1 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val= 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val= 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val=copy 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- 
accel/accel.sh@21 -- # val= 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val=software 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val=32 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val=32 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val=1 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val=Yes 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val= 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:50.522 20:58:04 -- accel/accel.sh@21 -- # val= 00:06:50.522 20:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # IFS=: 00:06:50.522 20:58:04 -- accel/accel.sh@20 -- # read -r var val 00:06:52.426 20:58:05 -- accel/accel.sh@21 -- # val= 00:06:52.426 20:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.426 20:58:05 -- accel/accel.sh@21 -- # val= 00:06:52.426 20:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.426 20:58:05 -- accel/accel.sh@21 -- # val= 00:06:52.426 20:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.426 20:58:05 -- accel/accel.sh@21 -- # val= 00:06:52.426 20:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.426 20:58:05 -- accel/accel.sh@21 -- # val= 00:06:52.426 20:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.426 20:58:05 -- accel/accel.sh@21 -- # val= 00:06:52.426 20:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.426 20:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.426 20:58:05 -- 
accel/accel.sh@20 -- # read -r var val 00:06:52.426 20:58:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.427 20:58:05 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:52.427 20:58:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.427 00:06:52.427 real 0m4.443s 00:06:52.427 user 0m3.956s 00:06:52.427 sys 0m0.276s 00:06:52.427 20:58:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.427 ************************************ 00:06:52.427 END TEST accel_copy 00:06:52.427 ************************************ 00:06:52.427 20:58:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.427 20:58:05 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.427 20:58:05 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:52.427 20:58:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.427 20:58:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.427 ************************************ 00:06:52.427 START TEST accel_fill 00:06:52.427 ************************************ 00:06:52.427 20:58:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.427 20:58:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.427 20:58:05 -- accel/accel.sh@17 -- # local accel_module 00:06:52.427 20:58:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.427 20:58:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.427 20:58:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.427 20:58:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.427 20:58:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.427 20:58:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.427 20:58:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.427 20:58:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.427 20:58:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.427 20:58:06 -- accel/accel.sh@42 -- # jq -r . 00:06:52.427 [2024-07-13 20:58:06.054899] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:52.427 [2024-07-13 20:58:06.055111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59141 ] 00:06:52.427 [2024-07-13 20:58:06.226652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.685 [2024-07-13 20:58:06.387502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.589 20:58:08 -- accel/accel.sh@18 -- # out=' 00:06:54.589 SPDK Configuration: 00:06:54.589 Core mask: 0x1 00:06:54.589 00:06:54.589 Accel Perf Configuration: 00:06:54.589 Workload Type: fill 00:06:54.589 Fill pattern: 0x80 00:06:54.589 Transfer size: 4096 bytes 00:06:54.589 Vector count 1 00:06:54.589 Module: software 00:06:54.589 Queue depth: 64 00:06:54.589 Allocate depth: 64 00:06:54.589 # threads/core: 1 00:06:54.589 Run time: 1 seconds 00:06:54.589 Verify: Yes 00:06:54.589 00:06:54.589 Running for 1 seconds... 
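The repeated val=... / case "$var" in / read -r var val lines in this trace are accel.sh running under set -x while it reads accel_perf's own configuration dump back over a pipe, splitting each line on ':' to capture fields such as the opcode and module. The fill run launched above can be reproduced outside the harness; this is a minimal sketch, assuming only a built SPDK tree at the path shown in the trace, with flag meanings paired against the "Accel Perf Configuration" dump that follows (the harness-specific -c /dev/fd/62 JSON config is omitted):

    # Hypothetical standalone re-run of the fill test (sketch, not the harness's exact invocation)
    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    args=(
      -t 1     # Run time: 1 seconds
      -w fill  # Workload Type: fill
      -f 128   # Fill pattern: 0x80 (128 decimal)
      -q 64    # Queue depth: 64
      -a 64    # Allocate depth: 64
      -y       # Verify: Yes
    )
    "$ACCEL_PERF" "${args[@]}"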
00:06:54.589 00:06:54.589 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.589 ------------------------------------------------------------------------------------ 00:06:54.589 0,0 462592/s 1807 MiB/s 0 0 00:06:54.589 ==================================================================================== 00:06:54.589 Total 462592/s 1807 MiB/s 0 0' 00:06:54.589 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.589 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.589 20:58:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.589 20:58:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.589 20:58:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.589 20:58:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.589 20:58:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.589 20:58:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.589 20:58:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.589 20:58:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.589 20:58:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.589 20:58:08 -- accel/accel.sh@42 -- # jq -r . 00:06:54.589 [2024-07-13 20:58:08.252030] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:54.589 [2024-07-13 20:58:08.252195] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:06:54.589 [2024-07-13 20:58:08.413372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.848 [2024-07-13 20:58:08.566518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val= 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val= 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val=0x1 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val= 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val= 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val=fill 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val=0x80 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 
00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val= 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val=software 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val=64 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val=64 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val=1 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val=Yes 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val= 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:54.848 20:58:08 -- accel/accel.sh@21 -- # val= 00:06:54.848 20:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # IFS=: 00:06:54.848 20:58:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.753 20:58:10 -- accel/accel.sh@21 -- # val= 00:06:56.753 20:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.753 20:58:10 -- accel/accel.sh@21 -- # val= 00:06:56.753 20:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.753 20:58:10 -- accel/accel.sh@21 -- # val= 00:06:56.753 20:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.753 20:58:10 -- accel/accel.sh@21 -- # val= 00:06:56.753 20:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.753 20:58:10 -- accel/accel.sh@21 -- # val= 00:06:56.753 20:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # IFS=: 
00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.753 20:58:10 -- accel/accel.sh@21 -- # val= 00:06:56.753 20:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.753 20:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.753 ************************************ 00:06:56.753 END TEST accel_fill 00:06:56.753 ************************************ 00:06:56.753 20:58:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.753 20:58:10 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:56.753 20:58:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.753 00:06:56.753 real 0m4.425s 00:06:56.753 user 0m3.947s 00:06:56.753 sys 0m0.274s 00:06:56.753 20:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.753 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:06:56.753 20:58:10 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:56.753 20:58:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:56.753 20:58:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.753 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:06:56.753 ************************************ 00:06:56.753 START TEST accel_copy_crc32c 00:06:56.753 ************************************ 00:06:56.753 20:58:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:56.753 20:58:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.753 20:58:10 -- accel/accel.sh@17 -- # local accel_module 00:06:56.754 20:58:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:56.754 20:58:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:56.754 20:58:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.754 20:58:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.754 20:58:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.754 20:58:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.754 20:58:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.754 20:58:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.754 20:58:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.754 20:58:10 -- accel/accel.sh@42 -- # jq -r . 00:06:56.754 [2024-07-13 20:58:10.539095] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:56.754 [2024-07-13 20:58:10.539274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59215 ] 00:06:57.012 [2024-07-13 20:58:10.709426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.012 [2024-07-13 20:58:10.858469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.920 20:58:12 -- accel/accel.sh@18 -- # out=' 00:06:58.920 SPDK Configuration: 00:06:58.920 Core mask: 0x1 00:06:58.920 00:06:58.920 Accel Perf Configuration: 00:06:58.920 Workload Type: copy_crc32c 00:06:58.920 CRC-32C seed: 0 00:06:58.920 Vector size: 4096 bytes 00:06:58.920 Transfer size: 4096 bytes 00:06:58.920 Vector count 1 00:06:58.920 Module: software 00:06:58.920 Queue depth: 32 00:06:58.920 Allocate depth: 32 00:06:58.920 # threads/core: 1 00:06:58.920 Run time: 1 seconds 00:06:58.920 Verify: Yes 00:06:58.920 00:06:58.920 Running for 1 seconds... 
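The copy_crc32c workload copies the source buffer and computes a CRC-32C over it in a single operation, which is why the dump below reports both a transfer size and a CRC-32C seed (0 here). A minimal standalone sketch of the same run, assuming the binary path from the trace above and accel_perf's defaults for the remaining parameters:

    # Hypothetical standalone copy_crc32c run (sketch); defaults match the
    # dumped configuration below: 4096-byte buffers, queue depth 32, software module
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y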
00:06:58.920 00:06:58.920 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.920 ------------------------------------------------------------------------------------ 00:06:58.920 0,0 233056/s 910 MiB/s 0 0 00:06:58.920 ==================================================================================== 00:06:58.920 Total 233056/s 910 MiB/s 0 0' 00:06:58.920 20:58:12 -- accel/accel.sh@20 -- # IFS=: 00:06:58.920 20:58:12 -- accel/accel.sh@20 -- # read -r var val 00:06:58.920 20:58:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:58.920 20:58:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:58.920 20:58:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.920 20:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.920 20:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.920 20:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.920 20:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.920 20:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.920 20:58:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.921 20:58:12 -- accel/accel.sh@42 -- # jq -r . 00:06:58.921 [2024-07-13 20:58:12.764588] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:58.921 [2024-07-13 20:58:12.764806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59241 ] 00:06:59.190 [2024-07-13 20:58:12.932690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.190 [2024-07-13 20:58:13.077886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val= 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val= 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val=0x1 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val= 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val= 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val=0 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 
20:58:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val= 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val=software 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val=32 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val=32 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val=1 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val=Yes 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val= 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:06:59.459 20:58:13 -- accel/accel.sh@21 -- # val= 00:06:59.459 20:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # IFS=: 00:06:59.459 20:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:01.362 20:58:14 -- accel/accel.sh@21 -- # val= 00:07:01.362 20:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.362 20:58:14 -- accel/accel.sh@21 -- # val= 00:07:01.362 20:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.362 20:58:14 -- accel/accel.sh@21 -- # val= 00:07:01.362 20:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.362 20:58:14 -- accel/accel.sh@21 -- # val= 00:07:01.362 20:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # IFS=: 
00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.362 20:58:14 -- accel/accel.sh@21 -- # val= 00:07:01.362 20:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.362 20:58:14 -- accel/accel.sh@21 -- # val= 00:07:01.362 20:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.362 20:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.362 20:58:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.362 ************************************ 00:07:01.362 END TEST accel_copy_crc32c 00:07:01.362 ************************************ 00:07:01.362 20:58:14 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:01.362 20:58:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.362 00:07:01.362 real 0m4.445s 00:07:01.362 user 0m3.939s 00:07:01.362 sys 0m0.300s 00:07:01.362 20:58:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.362 20:58:14 -- common/autotest_common.sh@10 -- # set +x 00:07:01.362 20:58:14 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:01.362 20:58:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:01.362 20:58:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.362 20:58:14 -- common/autotest_common.sh@10 -- # set +x 00:07:01.362 ************************************ 00:07:01.362 START TEST accel_copy_crc32c_C2 00:07:01.362 ************************************ 00:07:01.362 20:58:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:01.362 20:58:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.362 20:58:14 -- accel/accel.sh@17 -- # local accel_module 00:07:01.362 20:58:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:01.362 20:58:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:01.362 20:58:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.362 20:58:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.362 20:58:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.362 20:58:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.362 20:58:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.362 20:58:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.362 20:58:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.362 20:58:14 -- accel/accel.sh@42 -- # jq -r . 00:07:01.362 [2024-07-13 20:58:15.033020] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:01.362 [2024-07-13 20:58:15.033166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59282 ] 00:07:01.362 [2024-07-13 20:58:15.201403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.621 [2024-07-13 20:58:15.347513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.526 20:58:17 -- accel/accel.sh@18 -- # out=' 00:07:03.526 SPDK Configuration: 00:07:03.526 Core mask: 0x1 00:07:03.526 00:07:03.526 Accel Perf Configuration: 00:07:03.526 Workload Type: copy_crc32c 00:07:03.526 CRC-32C seed: 0 00:07:03.526 Vector size: 4096 bytes 00:07:03.526 Transfer size: 8192 bytes 00:07:03.526 Vector count 2 00:07:03.526 Module: software 00:07:03.526 Queue depth: 32 00:07:03.526 Allocate depth: 32 00:07:03.526 # threads/core: 1 00:07:03.526 Run time: 1 seconds 00:07:03.526 Verify: Yes 00:07:03.526 00:07:03.526 Running for 1 seconds... 00:07:03.526 00:07:03.526 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.526 ------------------------------------------------------------------------------------ 00:07:03.526 0,0 170240/s 1330 MiB/s 0 0 00:07:03.526 ==================================================================================== 00:07:03.526 Total 170240/s 1330 MiB/s 0 0' 00:07:03.526 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.526 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.526 20:58:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:03.526 20:58:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:03.526 20:58:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.526 20:58:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.526 20:58:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.526 20:58:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.526 20:58:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.526 20:58:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.526 20:58:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.526 20:58:17 -- accel/accel.sh@42 -- # jq -r . 00:07:03.526 [2024-07-13 20:58:17.245726] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
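With -C 2, accel_perf chains two 4096-byte source vectors per operation, which is why the dump above reports Vector size: 4096 bytes but Transfer size: 8192 bytes. Only core 0 ran, so the per-core row and the Total row carry the same figures, and the bandwidth follows directly from transfers/s times transfer size; a reader-side arithmetic check (not part of the test run):

    # Sanity-check the copy_crc32c -C 2 table above
    transfers=170240    # Total row, transfers per second
    xfer_bytes=8192     # Transfer size: 8192 bytes (2 x 4096-byte vectors)
    echo "$(( transfers * xfer_bytes / 1024 / 1024 )) MiB/s"   # prints: 1330 MiB/s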
00:07:03.526 [2024-07-13 20:58:17.246732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59308 ] 00:07:03.526 [2024-07-13 20:58:17.413254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.784 [2024-07-13 20:58:17.572598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val= 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val= 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val=0x1 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val= 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val= 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val=0 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val= 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val=software 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val=32 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val=32 
00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val=1 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val=Yes 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val= 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.044 20:58:17 -- accel/accel.sh@21 -- # val= 00:07:04.044 20:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.044 20:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.948 20:58:19 -- accel/accel.sh@21 -- # val= 00:07:05.948 20:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:05.948 20:58:19 -- accel/accel.sh@21 -- # val= 00:07:05.948 20:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:05.948 20:58:19 -- accel/accel.sh@21 -- # val= 00:07:05.948 20:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:05.948 20:58:19 -- accel/accel.sh@21 -- # val= 00:07:05.948 20:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:05.948 20:58:19 -- accel/accel.sh@21 -- # val= 00:07:05.948 20:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:05.948 20:58:19 -- accel/accel.sh@21 -- # val= 00:07:05.948 20:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:05.948 20:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:05.948 20:58:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.948 20:58:19 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:05.948 20:58:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.948 00:07:05.948 real 0m4.469s 00:07:05.948 user 0m3.978s 00:07:05.948 sys 0m0.284s 00:07:05.948 20:58:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.948 20:58:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.948 ************************************ 00:07:05.948 END TEST accel_copy_crc32c_C2 00:07:05.948 ************************************ 00:07:05.948 20:58:19 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:05.948 20:58:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:07:05.948 20:58:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.948 20:58:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.948 ************************************ 00:07:05.948 START TEST accel_dualcast 00:07:05.948 ************************************ 00:07:05.948 20:58:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:05.948 20:58:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.948 20:58:19 -- accel/accel.sh@17 -- # local accel_module 00:07:05.948 20:58:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:05.948 20:58:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:05.948 20:58:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.948 20:58:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.948 20:58:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.948 20:58:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.948 20:58:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.948 20:58:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.948 20:58:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.948 20:58:19 -- accel/accel.sh@42 -- # jq -r . 00:07:05.948 [2024-07-13 20:58:19.540180] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:05.948 [2024-07-13 20:58:19.540301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 00:07:05.948 [2024-07-13 20:58:19.692335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.948 [2024-07-13 20:58:19.857673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.854 20:58:21 -- accel/accel.sh@18 -- # out=' 00:07:07.854 SPDK Configuration: 00:07:07.854 Core mask: 0x1 00:07:07.854 00:07:07.854 Accel Perf Configuration: 00:07:07.854 Workload Type: dualcast 00:07:07.854 Transfer size: 4096 bytes 00:07:07.854 Vector count 1 00:07:07.854 Module: software 00:07:07.854 Queue depth: 32 00:07:07.854 Allocate depth: 32 00:07:07.854 # threads/core: 1 00:07:07.854 Run time: 1 seconds 00:07:07.854 Verify: Yes 00:07:07.854 00:07:07.854 Running for 1 seconds... 00:07:07.854 00:07:07.854 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.854 ------------------------------------------------------------------------------------ 00:07:07.854 0,0 305184/s 1192 MiB/s 0 0 00:07:07.854 ==================================================================================== 00:07:07.854 Total 305184/s 1192 MiB/s 0 0' 00:07:07.854 20:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.854 20:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.854 20:58:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:07.854 20:58:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:07.854 20:58:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.854 20:58:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.854 20:58:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.854 20:58:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.854 20:58:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.854 20:58:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.854 20:58:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.854 20:58:21 -- accel/accel.sh@42 -- # jq -r . 
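The dualcast workload writes one 4096-byte source buffer to two destination buffers per operation, hence the name; the startup banner that follows is the test's second accel_perf process (spdk_pid59381) repeating the same workload under the field-by-field trace. A minimal standalone sketch using the same binary:

    # Hypothetical standalone dualcast run (sketch): copy one source buffer
    # into two destinations, then verify the result (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y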
00:07:07.854 [2024-07-13 20:58:21.738164] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:07.854 [2024-07-13 20:58:21.738319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59381 ] 00:07:08.114 [2024-07-13 20:58:21.907544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.373 [2024-07-13 20:58:22.066775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val= 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val= 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val=0x1 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val= 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val= 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val=dualcast 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val= 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val=software 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val=32 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val=32 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val=1 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 
20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val=Yes 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val= 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:08.373 20:58:22 -- accel/accel.sh@21 -- # val= 00:07:08.373 20:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:08.373 20:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.279 20:58:23 -- accel/accel.sh@21 -- # val= 00:07:10.279 20:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.279 20:58:23 -- accel/accel.sh@21 -- # val= 00:07:10.279 20:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.279 20:58:23 -- accel/accel.sh@21 -- # val= 00:07:10.279 20:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.279 20:58:23 -- accel/accel.sh@21 -- # val= 00:07:10.279 20:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.279 20:58:23 -- accel/accel.sh@21 -- # val= 00:07:10.279 20:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.279 20:58:23 -- accel/accel.sh@21 -- # val= 00:07:10.279 20:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:10.279 20:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.279 20:58:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.279 20:58:23 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:10.279 20:58:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.279 ************************************ 00:07:10.279 END TEST accel_dualcast 00:07:10.279 ************************************ 00:07:10.279 00:07:10.279 real 0m4.408s 00:07:10.279 user 0m3.945s 00:07:10.279 sys 0m0.255s 00:07:10.279 20:58:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.279 20:58:23 -- common/autotest_common.sh@10 -- # set +x 00:07:10.279 20:58:23 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:10.279 20:58:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:10.279 20:58:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.279 20:58:23 -- common/autotest_common.sh@10 -- # set +x 00:07:10.279 ************************************ 00:07:10.279 START TEST accel_compare 00:07:10.279 ************************************ 00:07:10.279 20:58:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:10.279 
20:58:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.279 20:58:23 -- accel/accel.sh@17 -- # local accel_module 00:07:10.279 20:58:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:10.279 20:58:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:10.279 20:58:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.279 20:58:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.279 20:58:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.279 20:58:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.279 20:58:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.279 20:58:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.279 20:58:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.279 20:58:23 -- accel/accel.sh@42 -- # jq -r . 00:07:10.279 [2024-07-13 20:58:24.037988] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:10.279 [2024-07-13 20:58:24.038217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59422 ] 00:07:10.538 [2024-07-13 20:58:24.208828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.538 [2024-07-13 20:58:24.356399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.444 20:58:26 -- accel/accel.sh@18 -- # out=' 00:07:12.444 SPDK Configuration: 00:07:12.444 Core mask: 0x1 00:07:12.444 00:07:12.444 Accel Perf Configuration: 00:07:12.444 Workload Type: compare 00:07:12.444 Transfer size: 4096 bytes 00:07:12.444 Vector count 1 00:07:12.444 Module: software 00:07:12.444 Queue depth: 32 00:07:12.444 Allocate depth: 32 00:07:12.444 # threads/core: 1 00:07:12.444 Run time: 1 seconds 00:07:12.444 Verify: Yes 00:07:12.444 00:07:12.444 Running for 1 seconds... 00:07:12.444 00:07:12.444 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.444 ------------------------------------------------------------------------------------ 00:07:12.444 0,0 428800/s 1675 MiB/s 0 0 00:07:12.444 ==================================================================================== 00:07:12.444 Total 428800/s 1675 MiB/s 0 0' 00:07:12.444 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.444 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.444 20:58:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:12.444 20:58:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.444 20:58:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:12.444 20:58:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.444 20:58:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.444 20:58:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.444 20:58:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.444 20:58:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.444 20:58:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.444 20:58:26 -- accel/accel.sh@42 -- # jq -r . 00:07:12.444 [2024-07-13 20:58:26.237763] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:12.444 [2024-07-13 20:58:26.237989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59453 ] 00:07:12.703 [2024-07-13 20:58:26.410497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.703 [2024-07-13 20:58:26.584330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val= 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val= 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val=0x1 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val= 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val= 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val=compare 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val= 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val=software 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val=32 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val=32 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val=1 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val=Yes 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val= 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:12.962 20:58:26 -- accel/accel.sh@21 -- # val= 00:07:12.962 20:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:12.962 20:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.896 20:58:28 -- accel/accel.sh@21 -- # val= 00:07:14.896 20:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.896 20:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.896 20:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.896 20:58:28 -- accel/accel.sh@21 -- # val= 00:07:14.896 20:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.896 20:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.896 20:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.896 20:58:28 -- accel/accel.sh@21 -- # val= 00:07:14.896 20:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.896 20:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.896 20:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.896 20:58:28 -- accel/accel.sh@21 -- # val= 00:07:14.896 20:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.896 20:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.896 20:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.896 20:58:28 -- accel/accel.sh@21 -- # val= 00:07:14.896 20:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.897 20:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.897 20:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.897 20:58:28 -- accel/accel.sh@21 -- # val= 00:07:14.897 20:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.897 20:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.897 20:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.897 20:58:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.897 20:58:28 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:14.897 20:58:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.897 00:07:14.897 real 0m4.724s 00:07:14.897 user 0m4.234s 00:07:14.897 sys 0m0.283s 00:07:14.897 ************************************ 00:07:14.897 END TEST accel_compare 00:07:14.897 ************************************ 00:07:14.897 20:58:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.897 20:58:28 -- common/autotest_common.sh@10 -- # set +x 00:07:14.897 20:58:28 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:14.897 20:58:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:14.897 20:58:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.897 20:58:28 -- common/autotest_common.sh@10 -- # set +x 00:07:14.897 ************************************ 00:07:14.897 START TEST accel_xor 00:07:14.897 ************************************ 00:07:14.897 20:58:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:14.897 20:58:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.897 20:58:28 -- accel/accel.sh@17 -- # local accel_module 00:07:14.897 
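For reference, the compare stage that just finished can in principle be re-run by hand with the same accel_perf flags the script logged (-t run time in seconds, -w workload type, -y verify results). The harness also pipes a JSON accel config in via -c /dev/fd/62; with no modules configured, as here, dropping that flag and letting the software module be selected by default should be equivalent, though that fallback is an assumption rather than something this log shows:
  $ cd /home/vagrant/spdk_repo/spdk
  $ ./build/examples/accel_perf -t 1 -w compare -y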
20:58:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:14.897 20:58:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:14.897 20:58:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.897 20:58:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.897 20:58:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.897 20:58:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.897 20:58:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.897 20:58:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.897 20:58:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.897 20:58:28 -- accel/accel.sh@42 -- # jq -r . 00:07:14.897 [2024-07-13 20:58:28.786156] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:14.897 [2024-07-13 20:58:28.786343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59500 ] 00:07:15.156 [2024-07-13 20:58:28.950631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.415 [2024-07-13 20:58:29.112474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.319 20:58:30 -- accel/accel.sh@18 -- # out=' 00:07:17.319 SPDK Configuration: 00:07:17.319 Core mask: 0x1 00:07:17.319 00:07:17.319 Accel Perf Configuration: 00:07:17.319 Workload Type: xor 00:07:17.319 Source buffers: 2 00:07:17.319 Transfer size: 4096 bytes 00:07:17.319 Vector count 1 00:07:17.319 Module: software 00:07:17.319 Queue depth: 32 00:07:17.319 Allocate depth: 32 00:07:17.319 # threads/core: 1 00:07:17.319 Run time: 1 seconds 00:07:17.319 Verify: Yes 00:07:17.319 00:07:17.319 Running for 1 seconds... 00:07:17.319 00:07:17.319 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.319 ------------------------------------------------------------------------------------ 00:07:17.319 0,0 220960/s 863 MiB/s 0 0 00:07:17.319 ==================================================================================== 00:07:17.319 Total 220960/s 863 MiB/s 0 0' 00:07:17.319 20:58:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.319 20:58:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.319 20:58:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:17.319 20:58:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:17.319 20:58:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.319 20:58:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.319 20:58:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.319 20:58:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.319 20:58:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.319 20:58:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.319 20:58:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.319 20:58:30 -- accel/accel.sh@42 -- # jq -r . 00:07:17.319 [2024-07-13 20:58:31.037934] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:17.319 [2024-07-13 20:58:31.038107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59526 ] 00:07:17.319 [2024-07-13 20:58:31.205813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.579 [2024-07-13 20:58:31.352240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val= 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val= 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val=0x1 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val= 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val= 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val=xor 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val=2 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val= 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val=software 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val=32 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val=32 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val=1 00:07:17.838 20:58:31 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val=Yes 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val= 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.838 20:58:31 -- accel/accel.sh@21 -- # val= 00:07:17.838 20:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.838 20:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.741 20:58:33 -- accel/accel.sh@21 -- # val= 00:07:19.741 20:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:19.741 20:58:33 -- accel/accel.sh@21 -- # val= 00:07:19.741 20:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:19.741 20:58:33 -- accel/accel.sh@21 -- # val= 00:07:19.741 20:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:19.741 20:58:33 -- accel/accel.sh@21 -- # val= 00:07:19.741 20:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:19.741 20:58:33 -- accel/accel.sh@21 -- # val= 00:07:19.741 20:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:19.741 20:58:33 -- accel/accel.sh@21 -- # val= 00:07:19.741 20:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:19.741 20:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:19.741 20:58:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.741 20:58:33 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:19.741 20:58:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.741 00:07:19.741 real 0m4.504s 00:07:19.741 user 0m4.018s 00:07:19.741 sys 0m0.280s 00:07:19.741 20:58:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.741 20:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:19.741 ************************************ 00:07:19.741 END TEST accel_xor 00:07:19.741 ************************************ 00:07:19.741 20:58:33 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:19.741 20:58:33 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:19.741 20:58:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.741 20:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:19.741 ************************************ 00:07:19.741 START TEST accel_xor 00:07:19.741 ************************************ 00:07:19.741 
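The xor test now repeats with three source buffers instead of the default two reported in the configuration dump above; the only difference in the invocation is the -x argument passed through to accel_perf:
  $ ./build/examples/accel_perf -t 1 -w xor -y        # Source buffers: 2 (default, as above)
  $ ./build/examples/accel_perf -t 1 -w xor -y -x 3   # Source buffers: 3 (the run below)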
20:58:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:19.741 20:58:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.741 20:58:33 -- accel/accel.sh@17 -- # local accel_module 00:07:19.741 20:58:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:19.741 20:58:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:19.741 20:58:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.741 20:58:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.741 20:58:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.741 20:58:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.741 20:58:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.741 20:58:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.741 20:58:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.741 20:58:33 -- accel/accel.sh@42 -- # jq -r . 00:07:19.741 [2024-07-13 20:58:33.345824] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:19.741 [2024-07-13 20:58:33.346014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59567 ] 00:07:19.741 [2024-07-13 20:58:33.513629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.999 [2024-07-13 20:58:33.669437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.899 20:58:35 -- accel/accel.sh@18 -- # out=' 00:07:21.899 SPDK Configuration: 00:07:21.899 Core mask: 0x1 00:07:21.899 00:07:21.899 Accel Perf Configuration: 00:07:21.899 Workload Type: xor 00:07:21.899 Source buffers: 3 00:07:21.899 Transfer size: 4096 bytes 00:07:21.899 Vector count 1 00:07:21.899 Module: software 00:07:21.899 Queue depth: 32 00:07:21.899 Allocate depth: 32 00:07:21.899 # threads/core: 1 00:07:21.899 Run time: 1 seconds 00:07:21.899 Verify: Yes 00:07:21.899 00:07:21.899 Running for 1 seconds... 00:07:21.899 00:07:21.899 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.899 ------------------------------------------------------------------------------------ 00:07:21.899 0,0 216000/s 843 MiB/s 0 0 00:07:21.899 ==================================================================================== 00:07:21.899 Total 216000/s 843 MiB/s 0 0' 00:07:21.899 20:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.899 20:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.899 20:58:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:21.899 20:58:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:21.899 20:58:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.899 20:58:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.899 20:58:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.899 20:58:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.899 20:58:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.899 20:58:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.899 20:58:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.899 20:58:35 -- accel/accel.sh@42 -- # jq -r . 00:07:21.900 [2024-07-13 20:58:35.574680] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:21.900 [2024-07-13 20:58:35.574891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59593 ] 00:07:21.900 [2024-07-13 20:58:35.742396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.161 [2024-07-13 20:58:35.902346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val= 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val= 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val=0x1 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val= 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val= 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val=xor 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val=3 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val= 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val=software 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val=32 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val=32 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val=1 00:07:22.161 20:58:36 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val=Yes 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val= 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:22.161 20:58:36 -- accel/accel.sh@21 -- # val= 00:07:22.161 20:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:22.161 20:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:24.066 20:58:37 -- accel/accel.sh@21 -- # val= 00:07:24.066 20:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.066 20:58:37 -- accel/accel.sh@21 -- # val= 00:07:24.066 20:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.066 20:58:37 -- accel/accel.sh@21 -- # val= 00:07:24.066 20:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.066 20:58:37 -- accel/accel.sh@21 -- # val= 00:07:24.066 20:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.066 20:58:37 -- accel/accel.sh@21 -- # val= 00:07:24.066 20:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.066 20:58:37 -- accel/accel.sh@21 -- # val= 00:07:24.066 20:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.066 20:58:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.066 20:58:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.066 20:58:37 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:24.066 20:58:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.066 00:07:24.066 real 0m4.429s 00:07:24.066 user 0m3.928s 00:07:24.066 sys 0m0.295s 00:07:24.066 20:58:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.066 ************************************ 00:07:24.066 END TEST accel_xor 00:07:24.066 ************************************ 00:07:24.066 20:58:37 -- common/autotest_common.sh@10 -- # set +x 00:07:24.066 20:58:37 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:24.066 20:58:37 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:24.066 20:58:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.066 20:58:37 -- common/autotest_common.sh@10 -- # set +x 00:07:24.066 ************************************ 00:07:24.066 START TEST accel_dif_verify 00:07:24.066 ************************************ 
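Next up is dif_verify, which exercises the T10 DIF verification path. The harness passes no extra data-integrity options, so the logged command reduces to the sketch below; the 512-byte block size and 8-byte metadata size shown in the configuration dump that follows appear to be accel_perf defaults rather than flags set here:
  $ ./build/examples/accel_perf -t 1 -w dif_verify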
00:07:24.066 20:58:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:24.066 20:58:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.066 20:58:37 -- accel/accel.sh@17 -- # local accel_module 00:07:24.066 20:58:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:24.066 20:58:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:24.066 20:58:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.066 20:58:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.066 20:58:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.066 20:58:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.066 20:58:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.066 20:58:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.066 20:58:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.066 20:58:37 -- accel/accel.sh@42 -- # jq -r . 00:07:24.066 [2024-07-13 20:58:37.822423] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:24.066 [2024-07-13 20:58:37.822606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59634 ] 00:07:24.325 [2024-07-13 20:58:38.014297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.325 [2024-07-13 20:58:38.168194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.231 20:58:40 -- accel/accel.sh@18 -- # out=' 00:07:26.231 SPDK Configuration: 00:07:26.231 Core mask: 0x1 00:07:26.231 00:07:26.231 Accel Perf Configuration: 00:07:26.231 Workload Type: dif_verify 00:07:26.231 Vector size: 4096 bytes 00:07:26.231 Transfer size: 4096 bytes 00:07:26.231 Block size: 512 bytes 00:07:26.231 Metadata size: 8 bytes 00:07:26.231 Vector count 1 00:07:26.231 Module: software 00:07:26.231 Queue depth: 32 00:07:26.231 Allocate depth: 32 00:07:26.231 # threads/core: 1 00:07:26.231 Run time: 1 seconds 00:07:26.231 Verify: No 00:07:26.231 00:07:26.231 Running for 1 seconds... 00:07:26.231 00:07:26.231 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.231 ------------------------------------------------------------------------------------ 00:07:26.231 0,0 106496/s 422 MiB/s 0 0 00:07:26.231 ==================================================================================== 00:07:26.231 Total 106496/s 416 MiB/s 0 0' 00:07:26.231 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.231 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.231 20:58:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:26.231 20:58:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:26.231 20:58:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.231 20:58:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.231 20:58:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.231 20:58:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.231 20:58:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.231 20:58:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.231 20:58:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.231 20:58:40 -- accel/accel.sh@42 -- # jq -r . 00:07:26.231 [2024-07-13 20:58:40.083213] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:26.231 [2024-07-13 20:58:40.083362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59670 ] 00:07:26.490 [2024-07-13 20:58:40.257271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.750 [2024-07-13 20:58:40.452110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val= 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val= 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val=0x1 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val= 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val= 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val=dif_verify 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val= 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val=software 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 
-- # val=32 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val=32 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val=1 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val=No 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val= 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:26.750 20:58:40 -- accel/accel.sh@21 -- # val= 00:07:26.750 20:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # IFS=: 00:07:26.750 20:58:40 -- accel/accel.sh@20 -- # read -r var val 00:07:28.657 20:58:42 -- accel/accel.sh@21 -- # val= 00:07:28.657 20:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.657 20:58:42 -- accel/accel.sh@21 -- # val= 00:07:28.657 20:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.657 20:58:42 -- accel/accel.sh@21 -- # val= 00:07:28.657 20:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.657 20:58:42 -- accel/accel.sh@21 -- # val= 00:07:28.657 20:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.657 20:58:42 -- accel/accel.sh@21 -- # val= 00:07:28.657 20:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.657 20:58:42 -- accel/accel.sh@21 -- # val= 00:07:28.657 20:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:28.657 20:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:28.657 ************************************ 00:07:28.657 END TEST accel_dif_verify 00:07:28.657 ************************************ 00:07:28.657 20:58:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.657 20:58:42 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:28.657 20:58:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.657 00:07:28.657 real 0m4.521s 00:07:28.657 user 0m4.015s 00:07:28.657 sys 0m0.297s 00:07:28.657 20:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.657 
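One oddity worth noting in the dif_verify results above: the per-core row reports 422 MiB/s while the Total row reports 416 MiB/s for the same 106496 transfers/s. The Total figure matches the exact integer math, so the per-core number most likely reflects a slightly different elapsed-time measurement for that core; the precise cause is not visible in this log:
  $ echo $(( 106496 * 4096 / 1048576 ))
  416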
20:58:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.657 20:58:42 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:28.657 20:58:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:28.657 20:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.657 20:58:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.657 ************************************ 00:07:28.657 START TEST accel_dif_generate 00:07:28.657 ************************************ 00:07:28.657 20:58:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:28.657 20:58:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.657 20:58:42 -- accel/accel.sh@17 -- # local accel_module 00:07:28.657 20:58:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:28.657 20:58:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:28.657 20:58:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.657 20:58:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.657 20:58:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.657 20:58:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.657 20:58:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.657 20:58:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.657 20:58:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.657 20:58:42 -- accel/accel.sh@42 -- # jq -r . 00:07:28.657 [2024-07-13 20:58:42.392728] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:28.657 [2024-07-13 20:58:42.392904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59712 ] 00:07:28.657 [2024-07-13 20:58:42.561687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.916 [2024-07-13 20:58:42.710105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.853 20:58:44 -- accel/accel.sh@18 -- # out=' 00:07:30.853 SPDK Configuration: 00:07:30.853 Core mask: 0x1 00:07:30.853 00:07:30.853 Accel Perf Configuration: 00:07:30.853 Workload Type: dif_generate 00:07:30.853 Vector size: 4096 bytes 00:07:30.853 Transfer size: 4096 bytes 00:07:30.853 Block size: 512 bytes 00:07:30.853 Metadata size: 8 bytes 00:07:30.853 Vector count 1 00:07:30.853 Module: software 00:07:30.853 Queue depth: 32 00:07:30.853 Allocate depth: 32 00:07:30.853 # threads/core: 1 00:07:30.853 Run time: 1 seconds 00:07:30.853 Verify: No 00:07:30.853 00:07:30.853 Running for 1 seconds... 
00:07:30.853 00:07:30.853 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.853 ------------------------------------------------------------------------------------ 00:07:30.853 0,0 128064/s 508 MiB/s 0 0 00:07:30.853 ==================================================================================== 00:07:30.853 Total 128064/s 500 MiB/s 0 0' 00:07:30.853 20:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.853 20:58:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:30.853 20:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.853 20:58:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:30.853 20:58:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.853 20:58:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.853 20:58:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.853 20:58:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.853 20:58:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.853 20:58:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.853 20:58:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.853 20:58:44 -- accel/accel.sh@42 -- # jq -r . 00:07:30.853 [2024-07-13 20:58:44.604277] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:30.853 [2024-07-13 20:58:44.604487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59738 ] 00:07:31.112 [2024-07-13 20:58:44.781592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.112 [2024-07-13 20:58:44.956225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.372 20:58:45 -- accel/accel.sh@21 -- # val= 00:07:31.372 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.372 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.372 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.372 20:58:45 -- accel/accel.sh@21 -- # val= 00:07:31.372 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.372 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.372 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.372 20:58:45 -- accel/accel.sh@21 -- # val=0x1 00:07:31.372 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.372 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.372 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.372 20:58:45 -- accel/accel.sh@21 -- # val= 00:07:31.372 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.372 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.372 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.372 20:58:45 -- accel/accel.sh@21 -- # val= 00:07:31.372 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.372 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.372 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.372 20:58:45 -- accel/accel.sh@21 -- # val=dif_generate 00:07:31.372 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 
00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val= 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val=software 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val=32 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val=32 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val=1 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val=No 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val= 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:31.373 20:58:45 -- accel/accel.sh@21 -- # val= 00:07:31.373 20:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:31.373 20:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:33.278 20:58:46 -- accel/accel.sh@21 -- # val= 00:07:33.278 20:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.278 20:58:46 -- accel/accel.sh@21 -- # val= 00:07:33.278 20:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.278 20:58:46 -- accel/accel.sh@21 -- # val= 00:07:33.278 20:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.278 20:58:46 -- 
accel/accel.sh@20 -- # IFS=: 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.278 20:58:46 -- accel/accel.sh@21 -- # val= 00:07:33.278 20:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.278 20:58:46 -- accel/accel.sh@21 -- # val= 00:07:33.278 20:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.278 20:58:46 -- accel/accel.sh@21 -- # val= 00:07:33.278 20:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # IFS=: 00:07:33.278 20:58:46 -- accel/accel.sh@20 -- # read -r var val 00:07:33.278 ************************************ 00:07:33.278 END TEST accel_dif_generate 00:07:33.278 ************************************ 00:07:33.278 20:58:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.278 20:58:46 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:33.278 20:58:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.278 00:07:33.278 real 0m4.542s 00:07:33.278 user 0m4.055s 00:07:33.278 sys 0m0.272s 00:07:33.278 20:58:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.278 20:58:46 -- common/autotest_common.sh@10 -- # set +x 00:07:33.278 20:58:46 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:33.278 20:58:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:33.278 20:58:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.278 20:58:46 -- common/autotest_common.sh@10 -- # set +x 00:07:33.278 ************************************ 00:07:33.278 START TEST accel_dif_generate_copy 00:07:33.278 ************************************ 00:07:33.278 20:58:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:33.278 20:58:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.278 20:58:46 -- accel/accel.sh@17 -- # local accel_module 00:07:33.278 20:58:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:33.278 20:58:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:33.278 20:58:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.278 20:58:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.278 20:58:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.278 20:58:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.278 20:58:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.278 20:58:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.278 20:58:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.278 20:58:46 -- accel/accel.sh@42 -- # jq -r . 00:07:33.278 [2024-07-13 20:58:46.981979] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
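The dif_generate stage just completed computes protection information over the source buffers, while the dif_generate_copy stage starting here additionally copies the payload as it generates the metadata (a broad characterization of the two accel opcodes, not something this log spells out). In the harness, the two runs differ only in the workload name:
  $ ./build/examples/accel_perf -t 1 -w dif_generate
  $ ./build/examples/accel_perf -t 1 -w dif_generate_copy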
00:07:33.278 [2024-07-13 20:58:46.982149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59779 ] 00:07:33.278 [2024-07-13 20:58:47.150789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.537 [2024-07-13 20:58:47.325937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.443 20:58:49 -- accel/accel.sh@18 -- # out=' 00:07:35.443 SPDK Configuration: 00:07:35.443 Core mask: 0x1 00:07:35.443 00:07:35.443 Accel Perf Configuration: 00:07:35.443 Workload Type: dif_generate_copy 00:07:35.443 Vector size: 4096 bytes 00:07:35.443 Transfer size: 4096 bytes 00:07:35.443 Vector count 1 00:07:35.443 Module: software 00:07:35.443 Queue depth: 32 00:07:35.443 Allocate depth: 32 00:07:35.443 # threads/core: 1 00:07:35.443 Run time: 1 seconds 00:07:35.443 Verify: No 00:07:35.443 00:07:35.443 Running for 1 seconds... 00:07:35.443 00:07:35.443 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.443 ------------------------------------------------------------------------------------ 00:07:35.443 0,0 89696/s 355 MiB/s 0 0 00:07:35.443 ==================================================================================== 00:07:35.443 Total 89696/s 350 MiB/s 0 0' 00:07:35.443 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.443 20:58:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:35.443 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.443 20:58:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:35.443 20:58:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.443 20:58:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.443 20:58:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.443 20:58:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.443 20:58:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.443 20:58:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.443 20:58:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.443 20:58:49 -- accel/accel.sh@42 -- # jq -r . 00:07:35.443 [2024-07-13 20:58:49.308145] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:35.443 [2024-07-13 20:58:49.308302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59811 ] 00:07:35.702 [2024-07-13 20:58:49.477857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.961 [2024-07-13 20:58:49.643807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val= 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val= 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val=0x1 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val= 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val= 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val= 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val=software 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.961 20:58:49 -- accel/accel.sh@21 -- # val=32 00:07:35.961 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.961 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.962 20:58:49 -- accel/accel.sh@21 -- # val=32 00:07:35.962 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.962 20:58:49 -- accel/accel.sh@21 
-- # val=1 00:07:35.962 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.962 20:58:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.962 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.962 20:58:49 -- accel/accel.sh@21 -- # val=No 00:07:35.962 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.962 20:58:49 -- accel/accel.sh@21 -- # val= 00:07:35.962 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.962 20:58:49 -- accel/accel.sh@21 -- # val= 00:07:35.962 20:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:35.962 20:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:37.867 20:58:51 -- accel/accel.sh@21 -- # val= 00:07:37.867 20:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:37.867 20:58:51 -- accel/accel.sh@21 -- # val= 00:07:37.867 20:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:37.867 20:58:51 -- accel/accel.sh@21 -- # val= 00:07:37.867 20:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:37.867 20:58:51 -- accel/accel.sh@21 -- # val= 00:07:37.867 20:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:37.867 20:58:51 -- accel/accel.sh@21 -- # val= 00:07:37.867 20:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:37.867 20:58:51 -- accel/accel.sh@21 -- # val= 00:07:37.867 20:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:37.867 20:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:37.867 ************************************ 00:07:37.867 END TEST accel_dif_generate_copy 00:07:37.867 ************************************ 00:07:37.867 20:58:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.867 20:58:51 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:37.867 20:58:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.867 00:07:37.867 real 0m4.581s 00:07:37.867 user 0m4.084s 00:07:37.867 sys 0m0.284s 00:07:37.867 20:58:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.867 20:58:51 -- common/autotest_common.sh@10 -- # set +x 00:07:37.867 20:58:51 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:37.867 20:58:51 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.867 20:58:51 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:37.867 20:58:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.867 20:58:51 -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.867 ************************************ 00:07:37.867 START TEST accel_comp 00:07:37.867 ************************************ 00:07:37.867 20:58:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.867 20:58:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.867 20:58:51 -- accel/accel.sh@17 -- # local accel_module 00:07:37.867 20:58:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.867 20:58:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.867 20:58:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.867 20:58:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.867 20:58:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.867 20:58:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.867 20:58:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.867 20:58:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.867 20:58:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.867 20:58:51 -- accel/accel.sh@42 -- # jq -r . 00:07:37.867 [2024-07-13 20:58:51.617814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:37.867 [2024-07-13 20:58:51.617985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59852 ] 00:07:38.126 [2024-07-13 20:58:51.792254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.126 [2024-07-13 20:58:51.975350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.029 20:58:53 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:40.029 00:07:40.029 SPDK Configuration: 00:07:40.029 Core mask: 0x1 00:07:40.029 00:07:40.029 Accel Perf Configuration: 00:07:40.029 Workload Type: compress 00:07:40.029 Transfer size: 4096 bytes 00:07:40.029 Vector count 1 00:07:40.029 Module: software 00:07:40.029 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.029 Queue depth: 32 00:07:40.029 Allocate depth: 32 00:07:40.029 # threads/core: 1 00:07:40.029 Run time: 1 seconds 00:07:40.029 Verify: No 00:07:40.029 00:07:40.029 Running for 1 seconds... 
00:07:40.029 00:07:40.029 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.029 ------------------------------------------------------------------------------------ 00:07:40.029 0,0 49216/s 192 MiB/s 0 0 00:07:40.029 ==================================================================================== 00:07:40.029 Total 49216/s 192 MiB/s 0 0' 00:07:40.029 20:58:53 -- accel/accel.sh@20 -- # IFS=: 00:07:40.029 20:58:53 -- accel/accel.sh@20 -- # read -r var val 00:07:40.029 20:58:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.029 20:58:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.029 20:58:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.029 20:58:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.288 20:58:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.288 20:58:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.288 20:58:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.288 20:58:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.288 20:58:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.288 20:58:53 -- accel/accel.sh@42 -- # jq -r . 00:07:40.288 [2024-07-13 20:58:54.000802] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:40.288 [2024-07-13 20:58:54.000993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59883 ] 00:07:40.442 [2024-07-13 20:58:54.172457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.547 [2024-07-13 20:58:54.348832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.805 20:58:54 -- accel/accel.sh@21 -- # val= 00:07:40.805 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.805 20:58:54 -- accel/accel.sh@21 -- # val= 00:07:40.805 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.805 20:58:54 -- accel/accel.sh@21 -- # val=0x1 00:07:40.805 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.805 20:58:54 -- accel/accel.sh@21 -- # val= 00:07:40.805 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.805 20:58:54 -- accel/accel.sh@21 -- # val= 00:07:40.805 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.805 20:58:54 -- accel/accel.sh@21 -- # val=compress 00:07:40.805 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.805 20:58:54 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # IFS=: 
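
Every accel_perf invocation traced here carries -c /dev/fd/62: accel.sh's build_accel_config assembles an accel_json_cfg array (empty in these runs, since none of the hardware-module switches are set) and pushes it through jq -r . onto descriptor 62 for accel_perf to read as its JSON configuration. A minimal sketch of the same pattern follows; the bare '{}' is an assumption, since the exact JSON the script emits for an empty module list is not visible in this trace:

    # hedged sketch: hand accel_perf a JSON config on fd 62, as accel.sh appears to do
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        62< <(echo '{}' | jq -r .)
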
00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.805 20:58:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:40.805 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.805 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.805 20:58:54 -- accel/accel.sh@21 -- # val= 00:07:40.806 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.806 20:58:54 -- accel/accel.sh@21 -- # val=software 00:07:40.806 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.806 20:58:54 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.806 20:58:54 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.806 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.806 20:58:54 -- accel/accel.sh@21 -- # val=32 00:07:40.806 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.806 20:58:54 -- accel/accel.sh@21 -- # val=32 00:07:40.806 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.806 20:58:54 -- accel/accel.sh@21 -- # val=1 00:07:40.806 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.806 20:58:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.806 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.806 20:58:54 -- accel/accel.sh@21 -- # val=No 00:07:40.806 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.806 20:58:54 -- accel/accel.sh@21 -- # val= 00:07:40.806 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.806 20:58:54 -- accel/accel.sh@21 -- # val= 00:07:40.806 20:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.806 20:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:42.706 20:58:56 -- accel/accel.sh@21 -- # val= 00:07:42.706 20:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.706 20:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:42.706 20:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:42.706 20:58:56 -- accel/accel.sh@21 -- # val= 00:07:42.706 20:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.706 20:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:42.706 20:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:42.706 20:58:56 -- accel/accel.sh@21 -- # val= 00:07:42.706 20:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.706 20:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:42.706 20:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:42.706 20:58:56 -- accel/accel.sh@21 -- # val= 
00:07:42.706 20:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.706 20:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:42.707 20:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:42.707 20:58:56 -- accel/accel.sh@21 -- # val= 00:07:42.707 20:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.707 20:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:42.707 20:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:42.707 20:58:56 -- accel/accel.sh@21 -- # val= 00:07:42.707 20:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.707 20:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:42.707 20:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:42.707 20:58:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.707 20:58:56 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:42.707 20:58:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.707 00:07:42.707 real 0m4.741s 00:07:42.707 user 0m4.197s 00:07:42.707 sys 0m0.331s 00:07:42.707 20:58:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.707 20:58:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.707 ************************************ 00:07:42.707 END TEST accel_comp 00:07:42.707 ************************************ 00:07:42.707 20:58:56 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:42.707 20:58:56 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:42.707 20:58:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.707 20:58:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.707 ************************************ 00:07:42.707 START TEST accel_decomp 00:07:42.707 ************************************ 00:07:42.707 20:58:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:42.707 20:58:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.707 20:58:56 -- accel/accel.sh@17 -- # local accel_module 00:07:42.707 20:58:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:42.707 20:58:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:42.707 20:58:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.707 20:58:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.707 20:58:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.707 20:58:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.707 20:58:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.707 20:58:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.707 20:58:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.707 20:58:56 -- accel/accel.sh@42 -- # jq -r . 00:07:42.707 [2024-07-13 20:58:56.412478] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:42.707 [2024-07-13 20:58:56.412660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59930 ] 00:07:42.707 [2024-07-13 20:58:56.584691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.964 [2024-07-13 20:58:56.798350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.875 20:58:58 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:44.875 00:07:44.875 SPDK Configuration: 00:07:44.875 Core mask: 0x1 00:07:44.875 00:07:44.875 Accel Perf Configuration: 00:07:44.875 Workload Type: decompress 00:07:44.875 Transfer size: 4096 bytes 00:07:44.875 Vector count 1 00:07:44.875 Module: software 00:07:44.875 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.875 Queue depth: 32 00:07:44.875 Allocate depth: 32 00:07:44.875 # threads/core: 1 00:07:44.875 Run time: 1 seconds 00:07:44.875 Verify: Yes 00:07:44.875 00:07:44.875 Running for 1 seconds... 00:07:44.875 00:07:44.875 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.875 ------------------------------------------------------------------------------------ 00:07:44.875 0,0 59264/s 231 MiB/s 0 0 00:07:44.875 ==================================================================================== 00:07:44.875 Total 59264/s 231 MiB/s 0 0' 00:07:44.875 20:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:44.875 20:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:44.875 20:58:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:44.875 20:58:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.875 20:58:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:44.875 20:58:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.875 20:58:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.875 20:58:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.875 20:58:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.875 20:58:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.875 20:58:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.875 20:58:58 -- accel/accel.sh@42 -- # jq -r . 00:07:44.875 [2024-07-13 20:58:58.765821] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
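
A consistency note on these result tables: the Bandwidth column is simply the Transfers column multiplied by the Transfer size from the configuration block, so the decompress run above, at 59264 transfers/s of 4096 bytes each, reports 231 MiB/s in both the per-core row and the Total row. The same integer arithmetic the tables imply, as a one-line shell check:

    echo $(( 59264 * 4096 / 1024 / 1024 ))    # prints 231, the MiB/s reported above
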
00:07:44.875 [2024-07-13 20:58:58.766006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59956 ] 00:07:45.148 [2024-07-13 20:58:58.934227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.406 [2024-07-13 20:58:59.107243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val= 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val= 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val= 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val=0x1 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val= 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val= 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val=decompress 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val= 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val=software 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val=32 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- 
accel/accel.sh@21 -- # val=32 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val=1 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val=Yes 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val= 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:45.406 20:58:59 -- accel/accel.sh@21 -- # val= 00:07:45.406 20:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:45.406 20:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.309 20:59:01 -- accel/accel.sh@21 -- # val= 00:07:47.309 20:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # IFS=: 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # read -r var val 00:07:47.309 20:59:01 -- accel/accel.sh@21 -- # val= 00:07:47.309 20:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # IFS=: 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # read -r var val 00:07:47.309 20:59:01 -- accel/accel.sh@21 -- # val= 00:07:47.309 20:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # IFS=: 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # read -r var val 00:07:47.309 20:59:01 -- accel/accel.sh@21 -- # val= 00:07:47.309 20:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # IFS=: 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # read -r var val 00:07:47.309 20:59:01 -- accel/accel.sh@21 -- # val= 00:07:47.309 20:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # IFS=: 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # read -r var val 00:07:47.309 20:59:01 -- accel/accel.sh@21 -- # val= 00:07:47.309 20:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # IFS=: 00:07:47.309 20:59:01 -- accel/accel.sh@20 -- # read -r var val 00:07:47.309 ************************************ 00:07:47.309 END TEST accel_decomp 00:07:47.309 ************************************ 00:07:47.309 20:59:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.309 20:59:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:47.309 20:59:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.309 00:07:47.309 real 0m4.671s 00:07:47.309 user 0m4.159s 00:07:47.309 sys 0m0.299s 00:07:47.309 20:59:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.309 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:07:47.309 20:59:01 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
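
The variant starting here, accel_decmop_full, is the same decompress test with a trailing -o 0. Putting its configuration dump side by side with the earlier runs ties each recurring accel_perf flag in this log to a line of the dump; the mapping below is inferred from the dumps in this log, not from accel_perf's own help text:

    -c /dev/fd/62    JSON accel config piped in by accel.sh
    -t 1             Run time: 1 seconds
    -w decompress    Workload Type: decompress
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib    File Name: the compressed input
    -y               Verify: Yes (omitted, as in the compress run, the dump reads Verify: No)
    -o 0             Transfer size: 111250 bytes, the full chunk, instead of the default 4096
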
00:07:47.309 20:59:01 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:47.309 20:59:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.309 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:07:47.309 ************************************ 00:07:47.309 START TEST accel_decmop_full 00:07:47.309 ************************************ 00:07:47.309 20:59:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:47.309 20:59:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.309 20:59:01 -- accel/accel.sh@17 -- # local accel_module 00:07:47.309 20:59:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:47.309 20:59:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:47.309 20:59:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.309 20:59:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.309 20:59:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.309 20:59:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.309 20:59:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.309 20:59:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.309 20:59:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.309 20:59:01 -- accel/accel.sh@42 -- # jq -r . 00:07:47.309 [2024-07-13 20:59:01.127763] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:47.309 [2024-07-13 20:59:01.127974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59997 ] 00:07:47.567 [2024-07-13 20:59:01.296446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.567 [2024-07-13 20:59:01.469062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.101 20:59:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:50.101 00:07:50.101 SPDK Configuration: 00:07:50.101 Core mask: 0x1 00:07:50.101 00:07:50.101 Accel Perf Configuration: 00:07:50.101 Workload Type: decompress 00:07:50.101 Transfer size: 111250 bytes 00:07:50.101 Vector count 1 00:07:50.101 Module: software 00:07:50.101 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.101 Queue depth: 32 00:07:50.101 Allocate depth: 32 00:07:50.101 # threads/core: 1 00:07:50.101 Run time: 1 seconds 00:07:50.101 Verify: Yes 00:07:50.101 00:07:50.101 Running for 1 seconds... 
00:07:50.101 00:07:50.101 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.101 ------------------------------------------------------------------------------------ 00:07:50.101 0,0 4576/s 485 MiB/s 0 0 00:07:50.101 ==================================================================================== 00:07:50.101 Total 4576/s 485 MiB/s 0 0' 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:50.101 20:59:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.101 20:59:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.101 20:59:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.101 20:59:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.101 20:59:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.101 20:59:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.101 20:59:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.101 20:59:03 -- accel/accel.sh@42 -- # jq -r . 00:07:50.101 [2024-07-13 20:59:03.464702] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:50.101 [2024-07-13 20:59:03.464826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60033 ] 00:07:50.101 [2024-07-13 20:59:03.623598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.101 [2024-07-13 20:59:03.788979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val= 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val= 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val=0x1 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val= 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val= 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val=decompress 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:50.101 20:59:03 -- accel/accel.sh@20
-- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val= 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val=software 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val=32 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val=32 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val=1 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val=Yes 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val= 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:50.101 20:59:03 -- accel/accel.sh@21 -- # val= 00:07:50.101 20:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # IFS=: 00:07:50.101 20:59:03 -- accel/accel.sh@20 -- # read -r var val 00:07:52.032 20:59:05 -- accel/accel.sh@21 -- # val= 00:07:52.032 20:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.032 20:59:05 -- accel/accel.sh@21 -- # val= 00:07:52.032 20:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.032 20:59:05 -- accel/accel.sh@21 -- # val= 00:07:52.032 20:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.032 20:59:05 -- accel/accel.sh@21 -- # 
val= 00:07:52.032 20:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.032 20:59:05 -- accel/accel.sh@21 -- # val= 00:07:52.032 20:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.032 20:59:05 -- accel/accel.sh@21 -- # val= 00:07:52.032 20:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.032 20:59:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.032 20:59:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.032 20:59:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:52.032 ************************************ 00:07:52.032 END TEST accel_decmop_full 00:07:52.032 ************************************ 00:07:52.032 20:59:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.032 00:07:52.032 real 0m4.652s 00:07:52.032 user 0m4.173s 00:07:52.032 sys 0m0.272s 00:07:52.032 20:59:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.032 20:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:52.032 20:59:05 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:52.032 20:59:05 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:52.032 20:59:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.032 20:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:52.032 ************************************ 00:07:52.032 START TEST accel_decomp_mcore 00:07:52.032 ************************************ 00:07:52.032 20:59:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:52.032 20:59:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.032 20:59:05 -- accel/accel.sh@17 -- # local accel_module 00:07:52.032 20:59:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:52.032 20:59:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:52.032 20:59:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.032 20:59:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.032 20:59:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.032 20:59:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.032 20:59:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.032 20:59:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.032 20:59:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.032 20:59:05 -- accel/accel.sh@42 -- # jq -r . 00:07:52.032 [2024-07-13 20:59:05.826770] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:52.032 [2024-07-13 20:59:05.827449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60075 ] 00:07:52.291 [2024-07-13 20:59:05.995723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.291 [2024-07-13 20:59:06.164604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.291 [2024-07-13 20:59:06.164751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.291 [2024-07-13 20:59:06.164925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.291 [2024-07-13 20:59:06.164929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.821 20:59:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:54.821 00:07:54.821 SPDK Configuration: 00:07:54.821 Core mask: 0xf 00:07:54.821 00:07:54.821 Accel Perf Configuration: 00:07:54.821 Workload Type: decompress 00:07:54.821 Transfer size: 4096 bytes 00:07:54.821 Vector count 1 00:07:54.821 Module: software 00:07:54.821 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.821 Queue depth: 32 00:07:54.821 Allocate depth: 32 00:07:54.821 # threads/core: 1 00:07:54.821 Run time: 1 seconds 00:07:54.821 Verify: Yes 00:07:54.821 00:07:54.821 Running for 1 seconds... 00:07:54.821 00:07:54.821 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:54.821 ------------------------------------------------------------------------------------ 00:07:54.821 0,0 53536/s 209 MiB/s 0 0 00:07:54.821 3,0 53728/s 209 MiB/s 0 0 00:07:54.822 2,0 53696/s 209 MiB/s 0 0 00:07:54.822 1,0 54208/s 211 MiB/s 0 0 00:07:54.822 ==================================================================================== 00:07:54.822 Total 215168/s 840 MiB/s 0 0' 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:54.822 20:59:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.822 20:59:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.822 20:59:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.822 20:59:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.822 20:59:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.822 20:59:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.822 20:59:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.822 20:59:08 -- accel/accel.sh@42 -- # jq -r . 00:07:54.822 [2024-07-13 20:59:08.202571] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
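
That was the first multi-core run in this stretch: accel_test's -m 0xf surfaces as -c 0xf in the DPDK EAL parameters, the app reports four available cores, and a reactor starts on each of cores 0 through 3, where the earlier single-core runs (-c 0x1) started core 0 alone. The mask is one bit per core; a throwaway decoder, purely illustrative and not part of accel.sh:

    mask=0xf
    for core in {0..31}; do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done

Throughput follows the mask: the Total row above, 215168 transfers/s (840 MiB/s), is roughly 3.6 times the 59264/s the same 4096-byte decompress sustained on a single core earlier.
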
00:07:54.822 [2024-07-13 20:59:08.202772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60104 ] 00:07:54.822 [2024-07-13 20:59:08.370575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.822 [2024-07-13 20:59:08.542346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.822 [2024-07-13 20:59:08.542486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.822 [2024-07-13 20:59:08.542591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.822 [2024-07-13 20:59:08.542811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val= 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val= 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val= 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val=0xf 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val= 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val= 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val=decompress 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val= 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val=software 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 
00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val=32 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val=32 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val=1 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val=Yes 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val= 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:54.822 20:59:08 -- accel/accel.sh@21 -- # val= 00:07:54.822 20:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # IFS=: 00:07:54.822 20:59:08 -- accel/accel.sh@20 -- # read -r var val 00:07:56.724 20:59:10 -- accel/accel.sh@21 -- # val= 00:07:56.724 20:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:56.724 20:59:10 -- accel/accel.sh@21 -- # val= 00:07:56.724 20:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:56.724 20:59:10 -- accel/accel.sh@21 -- # val= 00:07:56.724 20:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:56.724 20:59:10 -- accel/accel.sh@21 -- # val= 00:07:56.724 20:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:56.724 20:59:10 -- accel/accel.sh@21 -- # val= 00:07:56.724 20:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:56.724 20:59:10 -- accel/accel.sh@21 -- # val= 00:07:56.724 20:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:56.724 20:59:10 -- accel/accel.sh@21 -- # val= 00:07:56.724 20:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:56.724 20:59:10 -- accel/accel.sh@21 -- # val= 00:07:56.724 20:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:56.724 20:59:10 -- 
accel/accel.sh@20 -- # read -r var val 00:07:56.724 20:59:10 -- accel/accel.sh@21 -- # val= 00:07:56.724 20:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:56.724 20:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:56.724 20:59:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:56.724 20:59:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:56.724 20:59:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.724 00:07:56.724 real 0m4.751s 00:07:56.724 user 0m14.054s 00:07:56.724 sys 0m0.337s 00:07:56.724 20:59:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.724 ************************************ 00:07:56.725 END TEST accel_decomp_mcore 00:07:56.725 ************************************ 00:07:56.725 20:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:56.725 20:59:10 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:56.725 20:59:10 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:56.725 20:59:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.725 20:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:56.725 ************************************ 00:07:56.725 START TEST accel_decomp_full_mcore 00:07:56.725 ************************************ 00:07:56.725 20:59:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:56.725 20:59:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.725 20:59:10 -- accel/accel.sh@17 -- # local accel_module 00:07:56.725 20:59:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:56.725 20:59:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:56.725 20:59:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.725 20:59:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.725 20:59:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.725 20:59:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.725 20:59:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.725 20:59:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.725 20:59:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.725 20:59:10 -- accel/accel.sh@42 -- # jq -r . 00:07:56.725 [2024-07-13 20:59:10.619343] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:56.725 [2024-07-13 20:59:10.619489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60154 ] 00:07:56.983 [2024-07-13 20:59:10.780343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.242 [2024-07-13 20:59:10.957289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.242 [2024-07-13 20:59:10.957423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.242 [2024-07-13 20:59:10.957881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.242 [2024-07-13 20:59:10.957930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.144 20:59:13 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:59.144 00:07:59.144 SPDK Configuration: 00:07:59.144 Core mask: 0xf 00:07:59.144 00:07:59.144 Accel Perf Configuration: 00:07:59.144 Workload Type: decompress 00:07:59.144 Transfer size: 111250 bytes 00:07:59.144 Vector count 1 00:07:59.144 Module: software 00:07:59.144 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:59.144 Queue depth: 32 00:07:59.144 Allocate depth: 32 00:07:59.144 # threads/core: 1 00:07:59.144 Run time: 1 seconds 00:07:59.144 Verify: Yes 00:07:59.144 00:07:59.144 Running for 1 seconds... 00:07:59.144 00:07:59.144 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:59.144 ------------------------------------------------------------------------------------ 00:07:59.144 0,0 4320/s 458 MiB/s 0 0 00:07:59.144 3,0 4320/s 458 MiB/s 0 0 00:07:59.144 2,0 4288/s 454 MiB/s 0 0 00:07:59.144 1,0 4352/s 461 MiB/s 0 0 00:07:59.144 ==================================================================================== 00:07:59.144 Total 17280/s 1833 MiB/s 0 0' 00:07:59.144 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.144 20:59:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:59.144 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.144 20:59:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.144 20:59:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:59.144 20:59:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.144 20:59:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.144 20:59:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.144 20:59:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.144 20:59:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.144 20:59:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.144 20:59:13 -- accel/accel.sh@42 -- # jq -r . 00:07:59.144 [2024-07-13 20:59:13.053243] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
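
With -o 0 added, the four-core sweep trades operation rate for payload: 17280 transfers/s is a small fraction of the 215168/s posted by the 4096-byte run, but each transfer now moves 111250 bytes, so aggregate bandwidth more than doubles. Checking the Total row with the same integer arithmetic as before:

    echo $(( 17280 * 111250 / 1024 / 1024 ))    # prints 1833 MiB/s, against 840 for 4 KiB transfers
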
00:07:59.144 [2024-07-13 20:59:13.053474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60188 ] 00:07:59.404 [2024-07-13 20:59:13.215266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.662 [2024-07-13 20:59:13.374283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.662 [2024-07-13 20:59:13.374371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.662 [2024-07-13 20:59:13.374478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.662 [2024-07-13 20:59:13.374771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val= 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val= 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val= 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val=0xf 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val= 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val= 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val=decompress 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val= 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val=software 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 
00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val=32 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val=32 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val=1 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val=Yes 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val= 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:07:59.662 20:59:13 -- accel/accel.sh@21 -- # val= 00:07:59.662 20:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # IFS=: 00:07:59.662 20:59:13 -- accel/accel.sh@20 -- # read -r var val 00:08:01.566 20:59:15 -- accel/accel.sh@21 -- # val= 00:08:01.566 20:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # IFS=: 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # read -r var val 00:08:01.566 20:59:15 -- accel/accel.sh@21 -- # val= 00:08:01.566 20:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # IFS=: 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # read -r var val 00:08:01.566 20:59:15 -- accel/accel.sh@21 -- # val= 00:08:01.566 20:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # IFS=: 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # read -r var val 00:08:01.566 20:59:15 -- accel/accel.sh@21 -- # val= 00:08:01.566 20:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # IFS=: 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # read -r var val 00:08:01.566 20:59:15 -- accel/accel.sh@21 -- # val= 00:08:01.566 20:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # IFS=: 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # read -r var val 00:08:01.566 20:59:15 -- accel/accel.sh@21 -- # val= 00:08:01.566 20:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # IFS=: 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # read -r var val 00:08:01.566 20:59:15 -- accel/accel.sh@21 -- # val= 00:08:01.566 20:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # IFS=: 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # read -r var val 00:08:01.566 20:59:15 -- accel/accel.sh@21 -- # val= 00:08:01.566 20:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # IFS=: 00:08:01.566 20:59:15 -- 
accel/accel.sh@20 -- # read -r var val 00:08:01.566 20:59:15 -- accel/accel.sh@21 -- # val= 00:08:01.566 20:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # IFS=: 00:08:01.566 20:59:15 -- accel/accel.sh@20 -- # read -r var val 00:08:01.566 20:59:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:01.566 20:59:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:01.566 20:59:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.566 00:08:01.566 real 0m4.774s 00:08:01.566 user 0m14.168s 00:08:01.566 sys 0m0.323s 00:08:01.566 20:59:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.566 20:59:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.566 ************************************ 00:08:01.566 END TEST accel_decomp_full_mcore 00:08:01.566 ************************************ 00:08:01.566 20:59:15 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:01.566 20:59:15 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:01.566 20:59:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.566 20:59:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.566 ************************************ 00:08:01.566 START TEST accel_decomp_mthread 00:08:01.566 ************************************ 00:08:01.566 20:59:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:01.566 20:59:15 -- accel/accel.sh@16 -- # local accel_opc 00:08:01.566 20:59:15 -- accel/accel.sh@17 -- # local accel_module 00:08:01.566 20:59:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:01.566 20:59:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:01.566 20:59:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.566 20:59:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.566 20:59:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.566 20:59:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.566 20:59:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.566 20:59:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.566 20:59:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.566 20:59:15 -- accel/accel.sh@42 -- # jq -r . 00:08:01.566 [2024-07-13 20:59:15.454316] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:01.566 [2024-07-13 20:59:15.454469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60233 ] 00:08:01.826 [2024-07-13 20:59:15.621544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.085 [2024-07-13 20:59:15.782745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.013 20:59:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:04.013 00:08:04.013 SPDK Configuration: 00:08:04.013 Core mask: 0x1 00:08:04.013 00:08:04.013 Accel Perf Configuration: 00:08:04.013 Workload Type: decompress 00:08:04.013 Transfer size: 4096 bytes 00:08:04.013 Vector count 1 00:08:04.013 Module: software 00:08:04.013 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:04.013 Queue depth: 32 00:08:04.013 Allocate depth: 32 00:08:04.013 # threads/core: 2 00:08:04.013 Run time: 1 seconds 00:08:04.013 Verify: Yes 00:08:04.013 00:08:04.013 Running for 1 seconds... 00:08:04.013 00:08:04.013 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:04.013 ------------------------------------------------------------------------------------ 00:08:04.013 0,1 34112/s 62 MiB/s 0 0 00:08:04.013 0,0 33984/s 62 MiB/s 0 0 00:08:04.013 ==================================================================================== 00:08:04.013 Total 68096/s 266 MiB/s 0 0' 00:08:04.013 20:59:17 -- accel/accel.sh@20 -- # IFS=: 00:08:04.013 20:59:17 -- accel/accel.sh@20 -- # read -r var val 00:08:04.013 20:59:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:04.013 20:59:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:04.013 20:59:17 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.013 20:59:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.013 20:59:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.013 20:59:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.013 20:59:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.013 20:59:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.013 20:59:17 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.013 20:59:17 -- accel/accel.sh@42 -- # jq -r . 00:08:04.013 [2024-07-13 20:59:17.717140] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
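The accel_decomp_mthread run above drives the same software decompress path with two worker threads per core (-T 2); the harness also feeds an accel JSON config to the binary on fd 62 via -c /dev/fd/62. A minimal manual reproduction, assuming a built tree at /home/vagrant/spdk_repo/spdk and relying on the default software engine instead of an explicit config, might look like:

    cd /home/vagrant/spdk_repo/spdk
    # 1-second verified (-y) software decompress of the test bib file,
    # two worker threads per core (-T 2)
    ./build/examples/accel_perf -t 1 -w decompress \
        -l test/accel/bib -y -T 2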
00:08:04.013 [2024-07-13 20:59:17.717294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60265 ] 00:08:04.013 [2024-07-13 20:59:17.884334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.271 [2024-07-13 20:59:18.041082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val= 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val= 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val= 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val=0x1 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val= 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val= 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val=decompress 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val= 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val=software 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@23 -- # accel_module=software 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val=32 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- 
accel/accel.sh@21 -- # val=32 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val=2 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val=Yes 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val= 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:04.530 20:59:18 -- accel/accel.sh@21 -- # val= 00:08:04.530 20:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # IFS=: 00:08:04.530 20:59:18 -- accel/accel.sh@20 -- # read -r var val 00:08:06.432 20:59:20 -- accel/accel.sh@21 -- # val= 00:08:06.432 20:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # IFS=: 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # read -r var val 00:08:06.432 20:59:20 -- accel/accel.sh@21 -- # val= 00:08:06.432 20:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # IFS=: 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # read -r var val 00:08:06.432 20:59:20 -- accel/accel.sh@21 -- # val= 00:08:06.432 20:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # IFS=: 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # read -r var val 00:08:06.432 20:59:20 -- accel/accel.sh@21 -- # val= 00:08:06.432 20:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # IFS=: 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # read -r var val 00:08:06.432 20:59:20 -- accel/accel.sh@21 -- # val= 00:08:06.432 20:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # IFS=: 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # read -r var val 00:08:06.432 20:59:20 -- accel/accel.sh@21 -- # val= 00:08:06.432 20:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # IFS=: 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # read -r var val 00:08:06.432 20:59:20 -- accel/accel.sh@21 -- # val= 00:08:06.432 20:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # IFS=: 00:08:06.432 20:59:20 -- accel/accel.sh@20 -- # read -r var val 00:08:06.432 20:59:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:06.432 ************************************ 00:08:06.432 END TEST accel_decomp_mthread 00:08:06.432 ************************************ 00:08:06.432 20:59:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:06.433 20:59:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.433 00:08:06.433 real 0m4.687s 00:08:06.433 user 0m4.187s 00:08:06.433 sys 0m0.288s 00:08:06.433 20:59:20 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:08:06.433 20:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:06.433 20:59:20 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:06.433 20:59:20 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:06.433 20:59:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.433 20:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:06.433 ************************************ 00:08:06.433 START TEST accel_deomp_full_mthread 00:08:06.433 ************************************ 00:08:06.433 20:59:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:06.433 20:59:20 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.433 20:59:20 -- accel/accel.sh@17 -- # local accel_module 00:08:06.433 20:59:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:06.433 20:59:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:06.433 20:59:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.433 20:59:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.433 20:59:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.433 20:59:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.433 20:59:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.433 20:59:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.433 20:59:20 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.433 20:59:20 -- accel/accel.sh@42 -- # jq -r . 00:08:06.433 [2024-07-13 20:59:20.206208] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:06.433 [2024-07-13 20:59:20.206359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60306 ] 00:08:06.692 [2024-07-13 20:59:20.381204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.692 [2024-07-13 20:59:20.581489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.224 20:59:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:09.224 00:08:09.224 SPDK Configuration: 00:08:09.224 Core mask: 0x1 00:08:09.224 00:08:09.224 Accel Perf Configuration: 00:08:09.224 Workload Type: decompress 00:08:09.224 Transfer size: 111250 bytes 00:08:09.224 Vector count 1 00:08:09.224 Module: software 00:08:09.224 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:09.224 Queue depth: 32 00:08:09.224 Allocate depth: 32 00:08:09.224 # threads/core: 2 00:08:09.224 Run time: 1 seconds 00:08:09.224 Verify: Yes 00:08:09.224 00:08:09.224 Running for 1 seconds... 
00:08:09.224 00:08:09.224 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:09.224 ------------------------------------------------------------------------------------ 00:08:09.224 0,1 2016/s 83 MiB/s 0 0 00:08:09.224 0,0 1952/s 80 MiB/s 0 0 00:08:09.224 ==================================================================================== 00:08:09.224 Total 3968/s 420 MiB/s 0 0' 00:08:09.224 20:59:22 -- accel/accel.sh@20 -- # IFS=: 00:08:09.224 20:59:22 -- accel/accel.sh@20 -- # read -r var val 00:08:09.224 20:59:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.224 20:59:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.224 20:59:22 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.224 20:59:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.224 20:59:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.224 20:59:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.224 20:59:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.224 20:59:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.224 20:59:22 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.224 20:59:22 -- accel/accel.sh@42 -- # jq -r . 00:08:09.224 [2024-07-13 20:59:22.769857] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:09.225 [2024-07-13 20:59:22.770021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60342 ] 00:08:09.225 [2024-07-13 20:59:22.947980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.484 [2024-07-13 20:59:23.184432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val= 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val= 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val= 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val=0x1 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val= 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val= 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val=decompress 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val= 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val=software 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@23 -- # accel_module=software 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val=32 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val=32 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val=2 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val=Yes 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val= 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.484 20:59:23 -- accel/accel.sh@21 -- # val= 00:08:09.484 20:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.484 20:59:23 -- accel/accel.sh@20 -- # read -r var val 00:08:11.387 20:59:25 -- accel/accel.sh@21 -- # val= 00:08:11.387 20:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # IFS=: 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # read -r var val 00:08:11.387 20:59:25 -- accel/accel.sh@21 -- # val= 00:08:11.387 20:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # IFS=: 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # read -r var val 00:08:11.387 20:59:25 -- accel/accel.sh@21 -- # val= 00:08:11.387 20:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # IFS=: 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # 
read -r var val 00:08:11.387 20:59:25 -- accel/accel.sh@21 -- # val= 00:08:11.387 20:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # IFS=: 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # read -r var val 00:08:11.387 20:59:25 -- accel/accel.sh@21 -- # val= 00:08:11.387 20:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # IFS=: 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # read -r var val 00:08:11.387 20:59:25 -- accel/accel.sh@21 -- # val= 00:08:11.387 20:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # IFS=: 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # read -r var val 00:08:11.387 20:59:25 -- accel/accel.sh@21 -- # val= 00:08:11.387 20:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # IFS=: 00:08:11.387 20:59:25 -- accel/accel.sh@20 -- # read -r var val 00:08:11.387 20:59:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:11.387 20:59:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:11.387 20:59:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.387 00:08:11.387 real 0m5.154s 00:08:11.387 user 0m4.622s 00:08:11.387 sys 0m0.319s 00:08:11.387 20:59:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.387 ************************************ 00:08:11.387 END TEST accel_deomp_full_mthread 00:08:11.387 ************************************ 00:08:11.387 20:59:25 -- common/autotest_common.sh@10 -- # set +x 00:08:11.646 20:59:25 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:11.646 20:59:25 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:11.646 20:59:25 -- accel/accel.sh@129 -- # build_accel_config 00:08:11.646 20:59:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:11.646 20:59:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.646 20:59:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.646 20:59:25 -- common/autotest_common.sh@10 -- # set +x 00:08:11.646 20:59:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.646 20:59:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.646 20:59:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.646 20:59:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.646 20:59:25 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.646 20:59:25 -- accel/accel.sh@42 -- # jq -r . 00:08:11.646 ************************************ 00:08:11.646 START TEST accel_dif_functional_tests 00:08:11.646 ************************************ 00:08:11.646 20:59:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:11.646 [2024-07-13 20:59:25.446725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
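The accel_dif_functional_tests suite that starts here runs test/accel/dif/dif with the accel configuration handed over on fd 62. A sketch of the invocation under the same layout ($accel_json is a placeholder for whatever build_accel_config produced, which the trace does not show):

    cd /home/vagrant/spdk_repo/spdk
    # feed the accel JSON config to the binary on fd 62, as the harness does
    ./test/accel/dif/dif -c /dev/fd/62 62<<< "$accel_json"

The *ERROR* lines that follow ("Failed to compare Guard/App Tag/Ref Tag") are expected: the negative verify tests inject deliberate mismatches, and each such case is still reported as passed.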
00:08:11.646 [2024-07-13 20:59:25.446904] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60386 ] 00:08:11.905 [2024-07-13 20:59:25.609978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:11.905 [2024-07-13 20:59:25.814296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.905 [2024-07-13 20:59:25.814401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.905 [2024-07-13 20:59:25.814410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.474 00:08:12.474 00:08:12.474 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.474 http://cunit.sourceforge.net/ 00:08:12.474 00:08:12.474 00:08:12.474 Suite: accel_dif 00:08:12.474 Test: verify: DIF generated, GUARD check ...passed 00:08:12.474 Test: verify: DIF generated, APPTAG check ...passed 00:08:12.474 Test: verify: DIF generated, REFTAG check ...passed 00:08:12.474 Test: verify: DIF not generated, GUARD check ...[2024-07-13 20:59:26.104776] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:12.474 [2024-07-13 20:59:26.104906] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:12.474 passed 00:08:12.474 Test: verify: DIF not generated, APPTAG check ...passed 00:08:12.474 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 20:59:26.105116] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:12.474 [2024-07-13 20:59:26.105186] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:12.474 [2024-07-13 20:59:26.105242] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:12.474 passed 00:08:12.474 Test: verify: APPTAG correct, APPTAG check ...[2024-07-13 20:59:26.105357] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:12.474 passed 00:08:12.474 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:12.474 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-13 20:59:26.105471] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:12.474 passed 00:08:12.474 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:12.474 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:12.474 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 20:59:26.105960] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:12.474 passed 00:08:12.474 Test: generate copy: DIF generated, GUARD check ...passed 00:08:12.474 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:12.474 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:12.474 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:12.474 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:12.474 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:12.474 Test: generate copy: iovecs-len validate ...passed 00:08:12.474 Test: generate copy: buffer alignment validate ...[2024-07-13 20:59:26.106892] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:12.474 passed 00:08:12.474 00:08:12.474 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.474 suites 1 1 n/a 0 0 00:08:12.474 tests 20 20 20 0 0 00:08:12.474 asserts 204 204 204 0 n/a 00:08:12.474 00:08:12.474 Elapsed time = 0.007 seconds 00:08:13.412 ************************************ 00:08:13.412 END TEST accel_dif_functional_tests 00:08:13.412 ************************************ 00:08:13.412 00:08:13.412 real 0m1.894s 00:08:13.412 user 0m3.671s 00:08:13.412 sys 0m0.201s 00:08:13.412 20:59:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.412 20:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.412 ************************************ 00:08:13.412 END TEST accel 00:08:13.412 ************************************ 00:08:13.412 00:08:13.412 real 1m41.444s 00:08:13.412 user 1m51.940s 00:08:13.412 sys 0m7.639s 00:08:13.412 20:59:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.412 20:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.671 20:59:27 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:13.671 20:59:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:13.671 20:59:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.671 20:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.671 ************************************ 00:08:13.671 START TEST accel_rpc 00:08:13.671 ************************************ 00:08:13.671 20:59:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:13.671 * Looking for test storage... 00:08:13.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:13.671 20:59:27 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:13.671 20:59:27 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=60467 00:08:13.671 20:59:27 -- accel/accel_rpc.sh@15 -- # waitforlisten 60467 00:08:13.671 20:59:27 -- common/autotest_common.sh@819 -- # '[' -z 60467 ']' 00:08:13.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.671 20:59:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.671 20:59:27 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:13.671 20:59:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:13.671 20:59:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.671 20:59:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:13.671 20:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.671 [2024-07-13 20:59:27.556126] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
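The accel_rpc suite starts spdk_tgt with --wait-for-rpc so that opcode-to-module assignments can be made before the framework initializes. Reduced to a sketch, the RPC sequence visible in the trace is roughly:

    ./build/bin/spdk_tgt --wait-for-rpc &
    # (the harness waits for the RPC socket with waitforlisten before calling)
    # pin the copy opcode to the software module while the framework is paused
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    # confirm the assignment stuck
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy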
00:08:13.671 [2024-07-13 20:59:27.556329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60467 ] 00:08:13.930 [2024-07-13 20:59:27.731152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.190 [2024-07-13 20:59:27.928908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.190 [2024-07-13 20:59:27.929246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.758 20:59:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:14.758 20:59:28 -- common/autotest_common.sh@852 -- # return 0 00:08:14.758 20:59:28 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:14.758 20:59:28 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:14.758 20:59:28 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:14.758 20:59:28 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:14.758 20:59:28 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:14.758 20:59:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:14.758 20:59:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.758 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:14.758 ************************************ 00:08:14.758 START TEST accel_assign_opcode 00:08:14.758 ************************************ 00:08:14.758 20:59:28 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:08:14.758 20:59:28 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:14.758 20:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.758 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:14.758 [2024-07-13 20:59:28.454249] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:14.758 20:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.758 20:59:28 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:14.758 20:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.758 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:14.758 [2024-07-13 20:59:28.462211] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:14.758 20:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.758 20:59:28 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:14.758 20:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.758 20:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:15.326 20:59:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.326 20:59:29 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:15.326 20:59:29 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:15.326 20:59:29 -- accel/accel_rpc.sh@42 -- # grep software 00:08:15.326 20:59:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.326 20:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:15.327 20:59:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.327 software 00:08:15.327 ************************************ 00:08:15.327 END TEST accel_assign_opcode 00:08:15.327 ************************************ 00:08:15.327 00:08:15.327 real 0m0.668s 00:08:15.327 user 0m0.066s 00:08:15.327 sys 0m0.007s 00:08:15.327 20:59:29 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.327 20:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:15.327 20:59:29 -- accel/accel_rpc.sh@55 -- # killprocess 60467 00:08:15.327 20:59:29 -- common/autotest_common.sh@926 -- # '[' -z 60467 ']' 00:08:15.327 20:59:29 -- common/autotest_common.sh@930 -- # kill -0 60467 00:08:15.327 20:59:29 -- common/autotest_common.sh@931 -- # uname 00:08:15.327 20:59:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:15.327 20:59:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60467 00:08:15.327 20:59:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:15.327 killing process with pid 60467 00:08:15.327 20:59:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:15.327 20:59:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60467' 00:08:15.327 20:59:29 -- common/autotest_common.sh@945 -- # kill 60467 00:08:15.327 20:59:29 -- common/autotest_common.sh@950 -- # wait 60467 00:08:17.231 00:08:17.231 real 0m3.594s 00:08:17.231 user 0m3.650s 00:08:17.231 sys 0m0.448s 00:08:17.231 ************************************ 00:08:17.231 END TEST accel_rpc 00:08:17.231 ************************************ 00:08:17.231 20:59:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.231 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:08:17.231 20:59:30 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:17.231 20:59:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:17.231 20:59:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.231 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:08:17.231 ************************************ 00:08:17.231 START TEST app_cmdline 00:08:17.231 ************************************ 00:08:17.231 20:59:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:17.231 * Looking for test storage... 00:08:17.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:17.231 20:59:31 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:17.231 20:59:31 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60577 00:08:17.231 20:59:31 -- app/cmdline.sh@18 -- # waitforlisten 60577 00:08:17.231 20:59:31 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:17.231 20:59:31 -- common/autotest_common.sh@819 -- # '[' -z 60577 ']' 00:08:17.231 20:59:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.231 20:59:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:17.231 20:59:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.231 20:59:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:17.231 20:59:31 -- common/autotest_common.sh@10 -- # set +x 00:08:17.490 [2024-07-13 20:59:31.170150] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
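app_cmdline launches the target with an RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods), so any method outside that list must be rejected. The positive and negative checks below, as a sketch:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # allowed: returns the version object printed in the trace
    ./scripts/rpc.py spdk_get_version
    # not allowed: fails with JSON-RPC error -32601 (Method not found)
    ./scripts/rpc.py env_dpdk_get_mem_stats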
00:08:17.490 [2024-07-13 20:59:31.170317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60577 ] 00:08:17.490 [2024-07-13 20:59:31.330460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.750 [2024-07-13 20:59:31.500834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.750 [2024-07-13 20:59:31.501100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.318 20:59:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:18.318 20:59:32 -- common/autotest_common.sh@852 -- # return 0 00:08:18.318 20:59:32 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:18.578 { 00:08:18.578 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:08:18.578 "fields": { 00:08:18.578 "major": 24, 00:08:18.578 "minor": 1, 00:08:18.578 "patch": 1, 00:08:18.578 "suffix": "-pre", 00:08:18.578 "commit": "4b94202c6" 00:08:18.578 } 00:08:18.578 } 00:08:18.578 20:59:32 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:18.578 20:59:32 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:18.578 20:59:32 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:18.578 20:59:32 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:18.578 20:59:32 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:18.578 20:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.578 20:59:32 -- common/autotest_common.sh@10 -- # set +x 00:08:18.578 20:59:32 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:18.578 20:59:32 -- app/cmdline.sh@26 -- # sort 00:08:18.578 20:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.578 20:59:32 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:18.578 20:59:32 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:18.578 20:59:32 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.578 20:59:32 -- common/autotest_common.sh@640 -- # local es=0 00:08:18.578 20:59:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.578 20:59:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.578 20:59:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:18.578 20:59:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.578 20:59:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:18.578 20:59:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.578 20:59:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:18.578 20:59:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.578 20:59:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:18.578 20:59:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.847 request: 00:08:18.847 { 00:08:18.847 "method": "env_dpdk_get_mem_stats", 00:08:18.847 "req_id": 1 00:08:18.847 } 00:08:18.847 Got 
JSON-RPC error response 00:08:18.847 response: 00:08:18.847 { 00:08:18.847 "code": -32601, 00:08:18.847 "message": "Method not found" 00:08:18.847 } 00:08:18.847 20:59:32 -- common/autotest_common.sh@643 -- # es=1 00:08:18.847 20:59:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:18.847 20:59:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:18.847 20:59:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:18.847 20:59:32 -- app/cmdline.sh@1 -- # killprocess 60577 00:08:18.847 20:59:32 -- common/autotest_common.sh@926 -- # '[' -z 60577 ']' 00:08:18.847 20:59:32 -- common/autotest_common.sh@930 -- # kill -0 60577 00:08:18.847 20:59:32 -- common/autotest_common.sh@931 -- # uname 00:08:18.847 20:59:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:18.847 20:59:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60577 00:08:18.847 killing process with pid 60577 00:08:18.847 20:59:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:18.847 20:59:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:18.847 20:59:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60577' 00:08:18.847 20:59:32 -- common/autotest_common.sh@945 -- # kill 60577 00:08:18.847 20:59:32 -- common/autotest_common.sh@950 -- # wait 60577 00:08:20.759 00:08:20.759 real 0m3.505s 00:08:20.759 user 0m4.024s 00:08:20.759 sys 0m0.469s 00:08:20.759 20:59:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.760 ************************************ 00:08:20.760 20:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:20.760 END TEST app_cmdline 00:08:20.760 ************************************ 00:08:20.760 20:59:34 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:20.760 20:59:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.760 20:59:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.760 20:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:20.760 ************************************ 00:08:20.760 START TEST version 00:08:20.760 ************************************ 00:08:20.760 20:59:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:20.760 * Looking for test storage... 
00:08:20.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:20.760 20:59:34 -- app/version.sh@17 -- # get_header_version major 00:08:20.760 20:59:34 -- app/version.sh@14 -- # cut -f2 00:08:20.760 20:59:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:20.760 20:59:34 -- app/version.sh@14 -- # tr -d '"' 00:08:20.760 20:59:34 -- app/version.sh@17 -- # major=24 00:08:20.760 20:59:34 -- app/version.sh@18 -- # get_header_version minor 00:08:20.760 20:59:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:20.760 20:59:34 -- app/version.sh@14 -- # cut -f2 00:08:20.760 20:59:34 -- app/version.sh@14 -- # tr -d '"' 00:08:20.760 20:59:34 -- app/version.sh@18 -- # minor=1 00:08:20.760 20:59:34 -- app/version.sh@19 -- # get_header_version patch 00:08:20.760 20:59:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:20.760 20:59:34 -- app/version.sh@14 -- # cut -f2 00:08:20.760 20:59:34 -- app/version.sh@14 -- # tr -d '"' 00:08:20.760 20:59:34 -- app/version.sh@19 -- # patch=1 00:08:20.760 20:59:34 -- app/version.sh@20 -- # get_header_version suffix 00:08:20.760 20:59:34 -- app/version.sh@14 -- # cut -f2 00:08:20.760 20:59:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:20.760 20:59:34 -- app/version.sh@14 -- # tr -d '"' 00:08:20.760 20:59:34 -- app/version.sh@20 -- # suffix=-pre 00:08:20.760 20:59:34 -- app/version.sh@22 -- # version=24.1 00:08:20.760 20:59:34 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:20.760 20:59:34 -- app/version.sh@25 -- # version=24.1.1 00:08:20.760 20:59:34 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:20.760 20:59:34 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:20.760 20:59:34 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:21.031 20:59:34 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:21.031 20:59:34 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:21.031 00:08:21.031 real 0m0.158s 00:08:21.031 user 0m0.098s 00:08:21.031 sys 0m0.091s 00:08:21.031 20:59:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.031 ************************************ 00:08:21.031 END TEST version 00:08:21.031 ************************************ 00:08:21.031 20:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:21.032 20:59:34 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:21.032 20:59:34 -- spdk/autotest.sh@204 -- # uname -s 00:08:21.032 20:59:34 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:21.032 20:59:34 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:21.032 20:59:34 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:21.032 20:59:34 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:08:21.032 20:59:34 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:21.032 20:59:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:21.032 20:59:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.032 20:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:21.032 
************************************ 00:08:21.032 START TEST blockdev_nvme 00:08:21.032 ************************************ 00:08:21.032 20:59:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:21.032 * Looking for test storage... 00:08:21.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:21.032 20:59:34 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:21.032 20:59:34 -- bdev/nbd_common.sh@6 -- # set -e 00:08:21.032 20:59:34 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:21.032 20:59:34 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:21.032 20:59:34 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:21.032 20:59:34 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:21.032 20:59:34 -- bdev/blockdev.sh@18 -- # : 00:08:21.032 20:59:34 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:21.032 20:59:34 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:21.032 20:59:34 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:21.032 20:59:34 -- bdev/blockdev.sh@672 -- # uname -s 00:08:21.032 20:59:34 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:21.032 20:59:34 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:21.032 20:59:34 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:08:21.032 20:59:34 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:21.032 20:59:34 -- bdev/blockdev.sh@682 -- # dek= 00:08:21.032 20:59:34 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:21.032 20:59:34 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:21.032 20:59:34 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:21.032 20:59:34 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:08:21.032 20:59:34 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:08:21.032 20:59:34 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:21.032 20:59:34 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60736 00:08:21.032 20:59:34 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:21.032 20:59:34 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:21.032 20:59:34 -- bdev/blockdev.sh@47 -- # waitforlisten 60736 00:08:21.032 20:59:34 -- common/autotest_common.sh@819 -- # '[' -z 60736 ']' 00:08:21.032 20:59:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.032 20:59:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:21.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.032 20:59:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.032 20:59:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:21.032 20:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:21.290 [2024-07-13 20:59:34.953674] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
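blockdev_nvme builds its bdev configuration with scripts/gen_nvme.sh, which emits one bdev_nvme_attach_controller entry per PCIe NVMe device; setup_nvme_conf then loads that JSON into the paused target. A sketch using rpc_cmd (the harness's wrapper around scripts/rpc.py):

    # generate attach-controller JSON for the NVMe devices on the bus
    json=$(scripts/gen_nvme.sh)
    rpc_cmd load_subsystem_config -j "$json"
    # list the resulting unclaimed bdevs (Nvme0n1, Nvme1n1, Nvme2n1, ...)
    rpc_cmd bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'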
00:08:21.290 [2024-07-13 20:59:34.953816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60736 ] 00:08:21.290 [2024-07-13 20:59:35.115533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.549 [2024-07-13 20:59:35.277356] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.549 [2024-07-13 20:59:35.277573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.927 20:59:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:22.927 20:59:36 -- common/autotest_common.sh@852 -- # return 0 00:08:22.927 20:59:36 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:22.927 20:59:36 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:08:22.927 20:59:36 -- bdev/blockdev.sh@79 -- # local json 00:08:22.927 20:59:36 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:22.927 20:59:36 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:22.927 20:59:36 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:22.927 20:59:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.927 20:59:36 -- common/autotest_common.sh@10 -- # set +x 00:08:23.187 20:59:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.187 20:59:36 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:23.187 20:59:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.187 20:59:36 -- common/autotest_common.sh@10 -- # set +x 00:08:23.187 20:59:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.187 20:59:36 -- bdev/blockdev.sh@738 -- # cat 00:08:23.187 20:59:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:23.187 20:59:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.187 20:59:36 -- common/autotest_common.sh@10 -- # set +x 00:08:23.187 20:59:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.187 20:59:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:23.187 20:59:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.187 20:59:36 -- common/autotest_common.sh@10 -- # set +x 00:08:23.187 20:59:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.187 20:59:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:23.187 20:59:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.187 20:59:36 -- common/autotest_common.sh@10 -- # set +x 00:08:23.187 20:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.187 20:59:37 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:23.187 20:59:37 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:23.187 20:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.187 20:59:37 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:23.187 20:59:37 -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.187 20:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.187 20:59:37 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:23.187 20:59:37 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:23.188 20:59:37 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ba53e552-a2a5-4e3c-8c2f-12cc23cabceb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ba53e552-a2a5-4e3c-8c2f-12cc23cabceb",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2352bddb-7406-462f-acec-e9cb091f0ca0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2352bddb-7406-462f-acec-e9cb091f0ca0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e5fa2586-f65f-4ae9-af59-f12828a66029"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e5fa2586-f65f-4ae9-af59-f12828a66029",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4912c83d-eb53-447c-ac62-a7a87f55d314"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4912c83d-eb53-447c-ac62-a7a87f55d314",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "dbbfce86-f944-4d89-b078-d97ab53d3a8b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dbbfce86-f944-4d89-b078-d97ab53d3a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "26c50a80-89be-43fc-8c0d-b19a12e532b1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"26c50a80-89be-43fc-8c0d-b19a12e532b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:23.448 20:59:37 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:23.448 20:59:37 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:08:23.448 20:59:37 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:23.448 20:59:37 -- bdev/blockdev.sh@752 -- # killprocess 60736 00:08:23.448 20:59:37 -- common/autotest_common.sh@926 -- # '[' -z 60736 ']' 00:08:23.448 20:59:37 -- common/autotest_common.sh@930 -- # kill -0 60736 00:08:23.448 20:59:37 -- common/autotest_common.sh@931 -- # uname 00:08:23.448 20:59:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:23.448 20:59:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60736 00:08:23.448 20:59:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:23.448 killing process with pid 60736 00:08:23.448 20:59:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:23.448 20:59:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60736' 00:08:23.448 20:59:37 -- common/autotest_common.sh@945 -- # kill 60736 00:08:23.448 20:59:37 -- common/autotest_common.sh@950 -- # wait 60736 00:08:25.983 20:59:39 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:25.983 20:59:39 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:25.983 20:59:39 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:25.983 20:59:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.983 20:59:39 -- common/autotest_common.sh@10 -- # set +x 00:08:25.983 ************************************ 00:08:25.983 START TEST bdev_hello_world 00:08:25.983 ************************************ 00:08:25.983 20:59:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:25.983 [2024-07-13 20:59:39.485990] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:25.983 [2024-07-13 20:59:39.486170] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60839 ]
00:08:25.983 [2024-07-13 20:59:39.657460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:25.983 [2024-07-13 20:59:39.860667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:26.560 [2024-07-13 20:59:40.477429] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:08:26.560 [2024-07-13 20:59:40.477483] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:08:26.560 [2024-07-13 20:59:40.477513] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:08:26.560 [2024-07-13 20:59:40.480877] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:08:26.560 [2024-07-13 20:59:40.481520] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:08:26.560 [2024-07-13 20:59:40.481548] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:08:26.560 [2024-07-13 20:59:40.481786] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:08:26.560
00:08:26.560 [2024-07-13 20:59:40.481832] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:08:27.936
00:08:27.936 real 0m2.164s
00:08:27.936 user 0m1.825s
00:08:27.936 sys 0m0.229s
00:08:27.936 20:59:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:27.936 ************************************
00:08:27.936 END TEST bdev_hello_world
00:08:27.936 ************************************
00:08:27.936 20:59:41 -- common/autotest_common.sh@10 -- # set +x
00:08:27.936 20:59:41 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:08:27.937 20:59:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:08:27.937 20:59:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:27.937 20:59:41 -- common/autotest_common.sh@10 -- # set +x
00:08:27.937 ************************************
00:08:27.937 START TEST bdev_bounds
00:08:27.937 ************************************
00:08:27.937 20:59:41 -- common/autotest_common.sh@1104 -- # bdev_bounds ''
00:08:27.937 20:59:41 -- bdev/blockdev.sh@288 -- # bdevio_pid=60881
00:08:27.937 20:59:41 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:08:27.937 Process bdevio pid: 60881
00:08:27.937 20:59:41 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:08:27.937 20:59:41 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60881'
00:08:27.937 20:59:41 -- bdev/blockdev.sh@291 -- # waitforlisten 60881
00:08:27.937 20:59:41 -- common/autotest_common.sh@819 -- # '[' -z 60881 ']'
00:08:27.937 20:59:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:27.937 20:59:41 -- common/autotest_common.sh@824 -- # local max_retries=100
00:08:27.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:27.937 20:59:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
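For orientation: bdev_bounds starts the bdevio binary with -w, so it initializes, registers its CUnit suites, and then blocks waiting on the default RPC socket; once waitforlisten sees /var/tmp/spdk.sock, the suites below are kicked off over RPC. Condensed to its two essential commands (both taken verbatim from this log), the flow is roughly:

    # Sketch: start bdevio waiting for RPC, then trigger the suites over the socket
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # ...poll until /var/tmp/spdk.sock exists, as waitforlisten does, then:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests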
00:08:27.937 20:59:41 -- common/autotest_common.sh@828 -- # xtrace_disable
00:08:27.937 20:59:41 -- common/autotest_common.sh@10 -- # set +x
00:08:27.937 [2024-07-13 20:59:41.721601] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:08:27.937 [2024-07-13 20:59:41.721785] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60881 ]
00:08:28.195 [2024-07-13 20:59:41.896551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:28.195 [2024-07-13 20:59:42.116059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:28.195 [2024-07-13 20:59:42.116169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:28.195 [2024-07-13 20:59:42.116421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:29.569 20:59:43 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:08:29.569 20:59:43 -- common/autotest_common.sh@852 -- # return 0
00:08:29.569 20:59:43 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:08:29.827 I/O targets:
00:08:29.827 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:08:29.827 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:08:29.827 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:29.827 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:29.827 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:29.827 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:08:29.827
00:08:29.827
00:08:29.827 CUnit - A unit testing framework for C - Version 2.1-3
00:08:29.827 http://cunit.sourceforge.net/
00:08:29.827
00:08:29.827
00:08:29.827 Suite: bdevio tests on: Nvme3n1
00:08:29.827 Test: blockdev write read block ...passed
00:08:29.827 Test: blockdev write zeroes read block ...passed
00:08:29.827 Test: blockdev write zeroes read no split ...passed
00:08:29.827 Test: blockdev write zeroes read split ...passed
00:08:29.827 Test: blockdev write zeroes read split partial ...passed
00:08:29.827 Test: blockdev reset ...[2024-07-13 20:59:43.611881] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller
00:08:29.827 [2024-07-13 20:59:43.615575] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:29.827 passed
00:08:29.827 Test: blockdev write read 8 blocks ...passed
00:08:29.827 Test: blockdev write read size > 128k ...passed
00:08:29.827 Test: blockdev write read invalid size ...passed
00:08:29.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:29.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:29.827 Test: blockdev write read max offset ...passed
00:08:29.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:29.827 Test: blockdev writev readv 8 blocks ...passed
00:08:29.827 Test: blockdev writev readv 30 x 1block ...passed
00:08:29.827 Test: blockdev writev readv block ...passed
00:08:29.827 Test: blockdev writev readv size > 128k ...passed
00:08:29.827 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:29.827 Test: blockdev comparev and writev ...[2024-07-13 20:59:43.623715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27cc0e000 len:0x1000
00:08:29.827 [2024-07-13 20:59:43.623779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:29.827 passed
00:08:29.827 Test: blockdev nvme passthru rw ...passed
00:08:29.827 Test: blockdev nvme passthru vendor specific ...[2024-07-13 20:59:43.624543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:29.827 [2024-07-13 20:59:43.624585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:29.827 passed
00:08:29.827 Test: blockdev nvme admin passthru ...passed
00:08:29.827 Test: blockdev copy ...passed
00:08:29.827 Suite: bdevio tests on: Nvme2n3
00:08:29.827 Test: blockdev write read block ...passed
00:08:29.827 Test: blockdev write zeroes read block ...passed
00:08:29.827 Test: blockdev write zeroes read no split ...passed
00:08:29.827 Test: blockdev write zeroes read split ...passed
00:08:29.827 Test: blockdev write zeroes read split partial ...passed
00:08:29.827 Test: blockdev reset ...[2024-07-13 20:59:43.693973] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller
00:08:29.827 [2024-07-13 20:59:43.698310] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:29.827 passed
00:08:29.827 Test: blockdev write read 8 blocks ...passed
00:08:29.827 Test: blockdev write read size > 128k ...passed
00:08:29.827 Test: blockdev write read invalid size ...passed
00:08:29.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:29.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:29.827 Test: blockdev write read max offset ...passed
00:08:29.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:29.827 Test: blockdev writev readv 8 blocks ...passed
00:08:29.827 Test: blockdev writev readv 30 x 1block ...passed
00:08:29.827 Test: blockdev writev readv block ...passed
00:08:29.827 Test: blockdev writev readv size > 128k ...passed
00:08:29.827 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:29.827 Test: blockdev comparev and writev ...[2024-07-13 20:59:43.706120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27cc0a000 len:0x1000
00:08:29.827 [2024-07-13 20:59:43.706177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:29.827 passed
00:08:29.827 Test: blockdev nvme passthru rw ...passed
00:08:29.827 Test: blockdev nvme passthru vendor specific ...[2024-07-13 20:59:43.706976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:29.827 [2024-07-13 20:59:43.707016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:29.827 passed
00:08:29.827 Test: blockdev nvme admin passthru ...passed
00:08:29.827 Test: blockdev copy ...passed
00:08:29.827 Suite: bdevio tests on: Nvme2n2
00:08:29.827 Test: blockdev write read block ...passed
00:08:29.827 Test: blockdev write zeroes read block ...passed
00:08:29.827 Test: blockdev write zeroes read no split ...passed
00:08:29.827 Test: blockdev write zeroes read split ...passed
00:08:30.086 Test: blockdev write zeroes read split partial ...passed
00:08:30.086 Test: blockdev reset ...[2024-07-13 20:59:43.777556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller
00:08:30.086 [2024-07-13 20:59:43.781826] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:30.086 passed
00:08:30.086 Test: blockdev write read 8 blocks ...passed
00:08:30.086 Test: blockdev write read size > 128k ...passed
00:08:30.086 Test: blockdev write read invalid size ...passed
00:08:30.086 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:30.086 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:30.086 Test: blockdev write read max offset ...passed
00:08:30.086 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:30.086 Test: blockdev writev readv 8 blocks ...passed
00:08:30.086 Test: blockdev writev readv 30 x 1block ...passed
00:08:30.086 Test: blockdev writev readv block ...passed
00:08:30.086 Test: blockdev writev readv size > 128k ...passed
00:08:30.086 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:30.086 Test: blockdev comparev and writev ...[2024-07-13 20:59:43.789440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x270e06000 len:0x1000
00:08:30.086 [2024-07-13 20:59:43.789497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:30.086 passed
00:08:30.086 Test: blockdev nvme passthru rw ...passed
00:08:30.086 Test: blockdev nvme passthru vendor specific ...passed
00:08:30.086 Test: blockdev nvme admin passthru ...[2024-07-13 20:59:43.790285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:30.086 [2024-07-13 20:59:43.790340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:30.086 passed
00:08:30.086 Test: blockdev copy ...passed
00:08:30.086 Suite: bdevio tests on: Nvme2n1
00:08:30.086 Test: blockdev write read block ...passed
00:08:30.086 Test: blockdev write zeroes read block ...passed
00:08:30.086 Test: blockdev write zeroes read no split ...passed
00:08:30.086 Test: blockdev write zeroes read split ...passed
00:08:30.086 Test: blockdev write zeroes read split partial ...passed
00:08:30.086 Test: blockdev reset ...[2024-07-13 20:59:43.859682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller
00:08:30.086 [2024-07-13 20:59:43.863465] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:30.086 passed
00:08:30.086 Test: blockdev write read 8 blocks ...passed
00:08:30.086 Test: blockdev write read size > 128k ...passed
00:08:30.086 Test: blockdev write read invalid size ...passed
00:08:30.086 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:30.086 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:30.086 Test: blockdev write read max offset ...passed
00:08:30.086 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:30.086 Test: blockdev writev readv 8 blocks ...passed
00:08:30.086 Test: blockdev writev readv 30 x 1block ...passed
00:08:30.086 Test: blockdev writev readv block ...passed
00:08:30.086 Test: blockdev writev readv size > 128k ...passed
00:08:30.086 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:30.086 Test: blockdev comparev and writev ...[2024-07-13 20:59:43.871105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x270e01000 len:0x1000
00:08:30.086 [2024-07-13 20:59:43.871163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:30.086 passed
00:08:30.086 Test: blockdev nvme passthru rw ...passed
00:08:30.086 Test: blockdev nvme passthru vendor specific ...[2024-07-13 20:59:43.871979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:30.086 [2024-07-13 20:59:43.872021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:30.086 passed
00:08:30.086 Test: blockdev nvme admin passthru ...passed
00:08:30.086 Test: blockdev copy ...passed
00:08:30.086 Suite: bdevio tests on: Nvme1n1
00:08:30.086 Test: blockdev write read block ...passed
00:08:30.086 Test: blockdev write zeroes read block ...passed
00:08:30.086 Test: blockdev write zeroes read no split ...passed
00:08:30.086 Test: blockdev write zeroes read split ...passed
00:08:30.086 Test: blockdev write zeroes read split partial ...passed
00:08:30.086 Test: blockdev reset ...[2024-07-13 20:59:43.943415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller
00:08:30.086 [2024-07-13 20:59:43.947243] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:30.086 passed
00:08:30.086 Test: blockdev write read 8 blocks ...passed
00:08:30.086 Test: blockdev write read size > 128k ...passed
00:08:30.086 Test: blockdev write read invalid size ...passed
00:08:30.086 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:30.086 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:30.087 Test: blockdev write read max offset ...passed
00:08:30.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:30.087 Test: blockdev writev readv 8 blocks ...passed
00:08:30.087 Test: blockdev writev readv 30 x 1block ...passed
00:08:30.087 Test: blockdev writev readv block ...passed
00:08:30.087 Test: blockdev writev readv size > 128k ...passed
00:08:30.087 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:30.087 Test: blockdev comparev and writev ...[2024-07-13 20:59:43.954763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280e06000 len:0x1000
00:08:30.087 [2024-07-13 20:59:43.954817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:30.087 passed
00:08:30.087 Test: blockdev nvme passthru rw ...passed
00:08:30.087 Test: blockdev nvme passthru vendor specific ...passed
00:08:30.087 Test: blockdev nvme admin passthru ...[2024-07-13 20:59:43.955740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:30.087 [2024-07-13 20:59:43.955780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:30.087 passed
00:08:30.087 Test: blockdev copy ...passed
00:08:30.087 Suite: bdevio tests on: Nvme0n1
00:08:30.087 Test: blockdev write read block ...passed
00:08:30.087 Test: blockdev write zeroes read block ...passed
00:08:30.087 Test: blockdev write zeroes read no split ...passed
00:08:30.087 Test: blockdev write zeroes read split ...passed
00:08:30.346 Test: blockdev write zeroes read split partial ...passed
00:08:30.346 Test: blockdev reset ...[2024-07-13 20:59:44.034059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:08:30.346 [2024-07-13 20:59:44.037795] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:30.346 passed
00:08:30.346 Test: blockdev write read 8 blocks ...passed
00:08:30.346 Test: blockdev write read size > 128k ...passed
00:08:30.346 Test: blockdev write read invalid size ...passed
00:08:30.346 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:30.346 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:30.346 Test: blockdev write read max offset ...passed
00:08:30.346 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:30.346 Test: blockdev writev readv 8 blocks ...passed
00:08:30.346 Test: blockdev writev readv 30 x 1block ...passed
00:08:30.346 Test: blockdev writev readv block ...passed
00:08:30.346 Test: blockdev writev readv size > 128k ...passed
00:08:30.346 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:30.346 Test: blockdev comparev and writev ...[2024-07-13 20:59:44.045258] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:08:30.346 separate metadata which is not supported yet.
00:08:30.346 passed
00:08:30.346 Test: blockdev nvme passthru rw ...passed
00:08:30.346 Test: blockdev nvme passthru vendor specific ...[2024-07-13 20:59:44.045727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:08:30.346 [2024-07-13 20:59:44.045789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:08:30.346 passed
00:08:30.346 Test: blockdev nvme admin passthru ...passed
00:08:30.346 Test: blockdev copy ...passed
00:08:30.346
00:08:30.346 Run Summary: Type Total Ran Passed Failed Inactive
00:08:30.346 suites 6 6 n/a 0 0
00:08:30.346 tests 138 138 138 0 0
00:08:30.346 asserts 893 893 893 0 n/a
00:08:30.346
00:08:30.346 Elapsed time = 1.372 seconds
00:08:30.346 0
00:08:30.346 20:59:44 -- bdev/blockdev.sh@293 -- # killprocess 60881
00:08:30.346 20:59:44 -- common/autotest_common.sh@926 -- # '[' -z 60881 ']'
00:08:30.346 20:59:44 -- common/autotest_common.sh@930 -- # kill -0 60881
00:08:30.346 20:59:44 -- common/autotest_common.sh@931 -- # uname
00:08:30.346 20:59:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:08:30.346 20:59:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60881
00:08:30.346 20:59:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:08:30.346 killing process with pid 60881
00:08:30.346 20:59:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:08:30.346 20:59:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60881'
00:08:30.346 20:59:44 -- common/autotest_common.sh@945 -- # kill 60881
00:08:30.346 20:59:44 -- common/autotest_common.sh@950 -- # wait 60881
00:08:31.281 20:59:45 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:08:31.281
00:08:31.281 real 0m3.467s
00:08:31.281 user 0m9.077s
00:08:31.281 sys 0m0.434s
00:08:31.282 20:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:31.282 20:59:45 -- common/autotest_common.sh@10 -- # set +x
00:08:31.282 ************************************
00:08:31.282 END TEST bdev_bounds
00:08:31.282 ************************************
00:08:31.282 20:59:45 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:08:31.282 20:59:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:08:31.282 20:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:31.282 20:59:45 -- common/autotest_common.sh@10 -- # set +x
00:08:31.282 ************************************
00:08:31.282 START TEST bdev_nbd
00:08:31.282 ************************************
00:08:31.282 20:59:45 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:08:31.282 20:59:45 -- bdev/blockdev.sh@298 -- # uname -s
00:08:31.282 20:59:45 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:08:31.282 20:59:45 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:31.282 20:59:45 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:08:31.282 20:59:45 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:31.282 20:59:45 -- bdev/blockdev.sh@302 -- # local bdev_all
00:08:31.282 20:59:45 -- bdev/blockdev.sh@303 -- # local bdev_num=6
00:08:31.282 20:59:45 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:08:31.282 20:59:45 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:08:31.282 20:59:45 -- bdev/blockdev.sh@309 -- # local nbd_all
00:08:31.282 20:59:45 -- bdev/blockdev.sh@310 -- # bdev_num=6
00:08:31.282 20:59:45 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:31.282 20:59:45 -- bdev/blockdev.sh@312 -- # local nbd_list
00:08:31.282 20:59:45 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:31.282 20:59:45 -- bdev/blockdev.sh@313 -- # local bdev_list
00:08:31.282 20:59:45 -- bdev/blockdev.sh@316 -- # nbd_pid=60958
00:08:31.282 20:59:45 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:08:31.282 20:59:45 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:08:31.282 20:59:45 -- bdev/blockdev.sh@318 -- # waitforlisten 60958 /var/tmp/spdk-nbd.sock
00:08:31.282 20:59:45 -- common/autotest_common.sh@819 -- # '[' -z 60958 ']'
00:08:31.282 20:59:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:31.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:31.282 20:59:45 -- common/autotest_common.sh@824 -- # local max_retries=100
00:08:31.282 20:59:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:31.282 20:59:45 -- common/autotest_common.sh@828 -- # xtrace_disable
00:08:31.282 20:59:45 -- common/autotest_common.sh@10 -- # set +x
00:08:31.540 [2024-07-13 20:59:45.237951] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
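bdev_svc, started above, is a bare-bones SPDK app that loads the same bdev.json and then just serves RPCs on its own socket; the NBD tests use it as the server side. Boiled down (paths and RPC method names from this run; the explicit /dev/nbd0 argument is the form used later in this log, while this first pass lets the RPC pick the device), the harness does roughly:

    # Sketch of the bdev_nbd harness flow, under the assumptions above
    sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r $sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # once $sock is listening, export a bdev as a kernel block device:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock nbd_start_disk Nvme0n1 /dev/nbd0
    # ...exercise /dev/nbd0, then detach it:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock nbd_stop_disk /dev/nbd0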
00:08:31.540 [2024-07-13 20:59:45.238125] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:31.540 [2024-07-13 20:59:45.405577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:31.798 [2024-07-13 20:59:45.571350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.175 20:59:46 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:08:33.175 20:59:46 -- common/autotest_common.sh@852 -- # return 0
00:08:33.175 20:59:46 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@114 -- # local bdev_list
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@23 -- # local bdev_list
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@24 -- # local i
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@25 -- # local nbd_device
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:33.175 20:59:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:08:33.434 20:59:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:08:33.434 20:59:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:08:33.434 20:59:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:08:33.434 20:59:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:08:33.434 20:59:47 -- common/autotest_common.sh@857 -- # local i
00:08:33.434 20:59:47 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:08:33.434 20:59:47 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:08:33.434 20:59:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:08:33.434 20:59:47 -- common/autotest_common.sh@861 -- # break
00:08:33.434 20:59:47 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:08:33.434 20:59:47 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:08:33.434 20:59:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:33.434 1+0 records in
00:08:33.434 1+0 records out
00:08:33.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469074 s, 8.7 MB/s
00:08:33.434 20:59:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:33.434 20:59:47 -- common/autotest_common.sh@874 -- # size=4096
00:08:33.434 20:59:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:33.434 20:59:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:08:33.434 20:59:47 -- common/autotest_common.sh@877 -- # return 0
00:08:33.434 20:59:47 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:33.434 20:59:47 -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:33.434 20:59:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1
00:08:33.693 20:59:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:08:33.693 20:59:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:08:33.693 20:59:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:08:33.693 20:59:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:08:33.693 20:59:47 -- common/autotest_common.sh@857 -- # local i
00:08:33.693 20:59:47 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:08:33.693 20:59:47 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:08:33.693 20:59:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:08:33.693 20:59:47 -- common/autotest_common.sh@861 -- # break
00:08:33.693 20:59:47 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:08:33.693 20:59:47 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:08:33.693 20:59:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:33.693 1+0 records in
00:08:33.693 1+0 records out
00:08:33.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527363 s, 7.8 MB/s
00:08:33.693 20:59:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:33.693 20:59:47 -- common/autotest_common.sh@874 -- # size=4096
00:08:33.693 20:59:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:33.693 20:59:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:08:33.693 20:59:47 -- common/autotest_common.sh@877 -- # return 0
00:08:33.693 20:59:47 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:33.693 20:59:47 -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:33.693 20:59:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:08:33.952 20:59:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:08:33.952 20:59:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:08:33.952 20:59:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:08:33.952 20:59:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2
00:08:33.952 20:59:47 -- common/autotest_common.sh@857 -- # local i
00:08:33.952 20:59:47 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:08:33.952 20:59:47 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:08:33.952 20:59:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions
00:08:33.952 20:59:47 -- common/autotest_common.sh@861 -- # break
00:08:33.952 20:59:47 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:08:33.952 20:59:47 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:08:33.952 20:59:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:33.952 1+0 records in
00:08:33.952 1+0 records out
00:08:33.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735353 s, 5.6 MB/s
00:08:33.952 20:59:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:33.952 20:59:47 -- common/autotest_common.sh@874 -- # size=4096
00:08:33.952 20:59:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:33.952 20:59:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:08:33.952 20:59:47 -- common/autotest_common.sh@877 -- # return 0
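Each nbd_start_disk above is followed by the same waitfornbd verification, interleaved in the trace: poll /proc/partitions until the node appears (up to 20 tries), read one 4 KiB block with O_DIRECT into a scratch file, and require a non-empty result. A condensed sketch of that pattern (the retry delay is an assumption; this run succeeded on the first try, so no retries are visible in the log):

    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; not visible in this log
        done
        # prove the device is actually readable, not just present
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]
    }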
00:08:33.952 20:59:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:33.952 20:59:47 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:33.952 20:59:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:34.215 20:59:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:34.215 20:59:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:34.215 20:59:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:34.215 20:59:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:08:34.215 20:59:47 -- common/autotest_common.sh@857 -- # local i 00:08:34.215 20:59:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:34.215 20:59:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:34.215 20:59:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:08:34.215 20:59:47 -- common/autotest_common.sh@861 -- # break 00:08:34.215 20:59:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:34.215 20:59:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:34.215 20:59:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.215 1+0 records in 00:08:34.215 1+0 records out 00:08:34.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127548 s, 3.2 MB/s 00:08:34.215 20:59:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.215 20:59:47 -- common/autotest_common.sh@874 -- # size=4096 00:08:34.215 20:59:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.215 20:59:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:34.215 20:59:47 -- common/autotest_common.sh@877 -- # return 0 00:08:34.215 20:59:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.215 20:59:47 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.215 20:59:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:34.478 20:59:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:34.478 20:59:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:34.478 20:59:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:34.478 20:59:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:08:34.478 20:59:48 -- common/autotest_common.sh@857 -- # local i 00:08:34.478 20:59:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:34.478 20:59:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:34.478 20:59:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:08:34.478 20:59:48 -- common/autotest_common.sh@861 -- # break 00:08:34.478 20:59:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:34.478 20:59:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:34.478 20:59:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.478 1+0 records in 00:08:34.478 1+0 records out 00:08:34.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064717 s, 6.3 MB/s 00:08:34.478 20:59:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.478 20:59:48 -- common/autotest_common.sh@874 -- # size=4096 00:08:34.478 20:59:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.478 20:59:48 -- common/autotest_common.sh@876 -- # '[' 4096 
'!=' 0 ']' 00:08:34.478 20:59:48 -- common/autotest_common.sh@877 -- # return 0 00:08:34.478 20:59:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.478 20:59:48 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.478 20:59:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:34.738 20:59:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:34.738 20:59:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:34.738 20:59:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:34.738 20:59:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:08:34.738 20:59:48 -- common/autotest_common.sh@857 -- # local i 00:08:34.738 20:59:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:34.738 20:59:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:34.738 20:59:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:08:34.738 20:59:48 -- common/autotest_common.sh@861 -- # break 00:08:34.738 20:59:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:34.738 20:59:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:34.738 20:59:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.738 1+0 records in 00:08:34.738 1+0 records out 00:08:34.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000757412 s, 5.4 MB/s 00:08:34.738 20:59:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.738 20:59:48 -- common/autotest_common.sh@874 -- # size=4096 00:08:34.738 20:59:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.738 20:59:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:34.738 20:59:48 -- common/autotest_common.sh@877 -- # return 0 00:08:34.738 20:59:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.738 20:59:48 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.738 20:59:48 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd0", 00:08:34.997 "bdev_name": "Nvme0n1" 00:08:34.997 }, 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd1", 00:08:34.997 "bdev_name": "Nvme1n1" 00:08:34.997 }, 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd2", 00:08:34.997 "bdev_name": "Nvme2n1" 00:08:34.997 }, 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd3", 00:08:34.997 "bdev_name": "Nvme2n2" 00:08:34.997 }, 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd4", 00:08:34.997 "bdev_name": "Nvme2n3" 00:08:34.997 }, 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd5", 00:08:34.997 "bdev_name": "Nvme3n1" 00:08:34.997 } 00:08:34.997 ]' 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd0", 00:08:34.997 "bdev_name": "Nvme0n1" 00:08:34.997 }, 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd1", 00:08:34.997 "bdev_name": "Nvme1n1" 00:08:34.997 }, 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd2", 00:08:34.997 "bdev_name": "Nvme2n1" 00:08:34.997 }, 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd3", 00:08:34.997 "bdev_name": "Nvme2n2" 00:08:34.997 }, 00:08:34.997 { 00:08:34.997 "nbd_device": 
"/dev/nbd4", 00:08:34.997 "bdev_name": "Nvme2n3" 00:08:34.997 }, 00:08:34.997 { 00:08:34.997 "nbd_device": "/dev/nbd5", 00:08:34.997 "bdev_name": "Nvme3n1" 00:08:34.997 } 00:08:34.997 ]' 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@51 -- # local i 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:34.997 20:59:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:35.256 20:59:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:35.256 20:59:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:35.256 20:59:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:35.256 20:59:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:35.256 20:59:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:35.256 20:59:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:35.256 20:59:49 -- bdev/nbd_common.sh@41 -- # break 00:08:35.256 20:59:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:35.256 20:59:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.256 20:59:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:35.516 20:59:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:35.516 20:59:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:35.516 20:59:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:35.516 20:59:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:35.516 20:59:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:35.516 20:59:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:35.516 20:59:49 -- bdev/nbd_common.sh@41 -- # break 00:08:35.516 20:59:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:35.516 20:59:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.516 20:59:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:35.775 20:59:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:35.775 20:59:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:35.775 20:59:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:35.775 20:59:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:35.775 20:59:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:35.775 20:59:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:35.775 20:59:49 -- bdev/nbd_common.sh@41 -- # break 00:08:35.775 20:59:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:35.775 20:59:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.775 20:59:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:36.034 20:59:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:36.034 20:59:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:36.034 20:59:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:36.034 
20:59:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.034 20:59:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.034 20:59:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:36.034 20:59:49 -- bdev/nbd_common.sh@41 -- # break 00:08:36.034 20:59:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.034 20:59:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.034 20:59:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:36.292 20:59:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:36.292 20:59:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:36.292 20:59:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:36.292 20:59:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.292 20:59:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.292 20:59:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:36.292 20:59:50 -- bdev/nbd_common.sh@41 -- # break 00:08:36.292 20:59:50 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.292 20:59:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.292 20:59:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@41 -- # break 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.551 20:59:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.809 20:59:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.809 20:59:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.809 20:59:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.809 20:59:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.809 20:59:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.809 20:59:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.809 20:59:50 -- bdev/nbd_common.sh@65 -- # true 00:08:36.809 20:59:50 -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.809 20:59:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.809 20:59:50 -- bdev/nbd_common.sh@122 -- # count=0 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@127 -- # return 0 00:08:36.810 20:59:50 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@12 -- # local i 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:36.810 20:59:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:37.068 /dev/nbd0 00:08:37.068 20:59:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:37.068 20:59:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:37.068 20:59:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:37.068 20:59:50 -- common/autotest_common.sh@857 -- # local i 00:08:37.068 20:59:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:37.068 20:59:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:37.068 20:59:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:37.068 20:59:50 -- common/autotest_common.sh@861 -- # break 00:08:37.068 20:59:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:37.068 20:59:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:37.068 20:59:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.068 1+0 records in 00:08:37.068 1+0 records out 00:08:37.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423173 s, 9.7 MB/s 00:08:37.068 20:59:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.068 20:59:50 -- common/autotest_common.sh@874 -- # size=4096 00:08:37.068 20:59:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.068 20:59:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:37.068 20:59:50 -- common/autotest_common.sh@877 -- # return 0 00:08:37.068 20:59:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.068 20:59:50 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:37.068 20:59:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:37.326 /dev/nbd1 00:08:37.326 20:59:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:37.326 20:59:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:37.326 20:59:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:37.326 20:59:51 -- common/autotest_common.sh@857 -- # local i 00:08:37.326 20:59:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:37.326 20:59:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:37.326 20:59:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:37.326 20:59:51 -- common/autotest_common.sh@861 -- # break 
00:08:37.326 20:59:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:37.326 20:59:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:37.326 20:59:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.326 1+0 records in 00:08:37.326 1+0 records out 00:08:37.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834136 s, 4.9 MB/s 00:08:37.326 20:59:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.326 20:59:51 -- common/autotest_common.sh@874 -- # size=4096 00:08:37.326 20:59:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.326 20:59:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:37.326 20:59:51 -- common/autotest_common.sh@877 -- # return 0 00:08:37.326 20:59:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.326 20:59:51 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:37.326 20:59:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:37.326 /dev/nbd10 00:08:37.585 20:59:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:37.585 20:59:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:37.585 20:59:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:08:37.585 20:59:51 -- common/autotest_common.sh@857 -- # local i 00:08:37.585 20:59:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:37.585 20:59:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:37.585 20:59:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:08:37.585 20:59:51 -- common/autotest_common.sh@861 -- # break 00:08:37.585 20:59:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:37.585 20:59:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:37.585 20:59:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.585 1+0 records in 00:08:37.585 1+0 records out 00:08:37.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787932 s, 5.2 MB/s 00:08:37.585 20:59:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.585 20:59:51 -- common/autotest_common.sh@874 -- # size=4096 00:08:37.585 20:59:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.585 20:59:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:37.585 20:59:51 -- common/autotest_common.sh@877 -- # return 0 00:08:37.585 20:59:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.585 20:59:51 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:37.585 20:59:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:37.585 /dev/nbd11 00:08:37.844 20:59:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:37.844 20:59:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:37.844 20:59:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:08:37.844 20:59:51 -- common/autotest_common.sh@857 -- # local i 00:08:37.844 20:59:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:37.844 20:59:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:37.844 20:59:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:08:37.844 20:59:51 -- 
common/autotest_common.sh@861 -- # break 00:08:37.844 20:59:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:37.844 20:59:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:37.844 20:59:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.844 1+0 records in 00:08:37.844 1+0 records out 00:08:37.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680766 s, 6.0 MB/s 00:08:37.844 20:59:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.844 20:59:51 -- common/autotest_common.sh@874 -- # size=4096 00:08:37.844 20:59:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.844 20:59:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:37.844 20:59:51 -- common/autotest_common.sh@877 -- # return 0 00:08:37.844 20:59:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.844 20:59:51 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:37.844 20:59:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:37.844 /dev/nbd12 00:08:38.103 20:59:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:38.103 20:59:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:38.103 20:59:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:08:38.103 20:59:51 -- common/autotest_common.sh@857 -- # local i 00:08:38.103 20:59:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:38.103 20:59:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:38.103 20:59:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:08:38.103 20:59:51 -- common/autotest_common.sh@861 -- # break 00:08:38.103 20:59:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:38.103 20:59:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:38.103 20:59:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.103 1+0 records in 00:08:38.103 1+0 records out 00:08:38.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666411 s, 6.1 MB/s 00:08:38.103 20:59:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.103 20:59:51 -- common/autotest_common.sh@874 -- # size=4096 00:08:38.103 20:59:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.103 20:59:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:38.103 20:59:51 -- common/autotest_common.sh@877 -- # return 0 00:08:38.103 20:59:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.103 20:59:51 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:38.103 20:59:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:38.361 /dev/nbd13 00:08:38.361 20:59:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:38.361 20:59:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:38.361 20:59:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:08:38.361 20:59:52 -- common/autotest_common.sh@857 -- # local i 00:08:38.361 20:59:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:38.361 20:59:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:38.361 20:59:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 
00:08:38.361 20:59:52 -- common/autotest_common.sh@861 -- # break 00:08:38.361 20:59:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:38.361 20:59:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:38.361 20:59:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.361 1+0 records in 00:08:38.361 1+0 records out 00:08:38.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635861 s, 6.4 MB/s 00:08:38.361 20:59:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.361 20:59:52 -- common/autotest_common.sh@874 -- # size=4096 00:08:38.361 20:59:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.361 20:59:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:38.361 20:59:52 -- common/autotest_common.sh@877 -- # return 0 00:08:38.361 20:59:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.361 20:59:52 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:38.361 20:59:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:38.361 20:59:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.361 20:59:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.620 20:59:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd0", 00:08:38.621 "bdev_name": "Nvme0n1" 00:08:38.621 }, 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd1", 00:08:38.621 "bdev_name": "Nvme1n1" 00:08:38.621 }, 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd10", 00:08:38.621 "bdev_name": "Nvme2n1" 00:08:38.621 }, 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd11", 00:08:38.621 "bdev_name": "Nvme2n2" 00:08:38.621 }, 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd12", 00:08:38.621 "bdev_name": "Nvme2n3" 00:08:38.621 }, 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd13", 00:08:38.621 "bdev_name": "Nvme3n1" 00:08:38.621 } 00:08:38.621 ]' 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd0", 00:08:38.621 "bdev_name": "Nvme0n1" 00:08:38.621 }, 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd1", 00:08:38.621 "bdev_name": "Nvme1n1" 00:08:38.621 }, 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd10", 00:08:38.621 "bdev_name": "Nvme2n1" 00:08:38.621 }, 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd11", 00:08:38.621 "bdev_name": "Nvme2n2" 00:08:38.621 }, 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd12", 00:08:38.621 "bdev_name": "Nvme2n3" 00:08:38.621 }, 00:08:38.621 { 00:08:38.621 "nbd_device": "/dev/nbd13", 00:08:38.621 "bdev_name": "Nvme3n1" 00:08:38.621 } 00:08:38.621 ]' 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:38.621 /dev/nbd1 00:08:38.621 /dev/nbd10 00:08:38.621 /dev/nbd11 00:08:38.621 /dev/nbd12 00:08:38.621 /dev/nbd13' 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:38.621 /dev/nbd1 00:08:38.621 /dev/nbd10 00:08:38.621 /dev/nbd11 00:08:38.621 /dev/nbd12 00:08:38.621 /dev/nbd13' 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@65 -- # count=6 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@66 -- # echo 6 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@95 -- # 
count=6 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:38.621 256+0 records in 00:08:38.621 256+0 records out 00:08:38.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104432 s, 100 MB/s 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.621 20:59:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:38.881 256+0 records in 00:08:38.881 256+0 records out 00:08:38.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169992 s, 6.2 MB/s 00:08:38.881 20:59:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.881 20:59:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:39.140 256+0 records in 00:08:39.140 256+0 records out 00:08:39.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.199658 s, 5.3 MB/s 00:08:39.140 20:59:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.140 20:59:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:39.140 256+0 records in 00:08:39.140 256+0 records out 00:08:39.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163268 s, 6.4 MB/s 00:08:39.140 20:59:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.140 20:59:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:39.399 256+0 records in 00:08:39.399 256+0 records out 00:08:39.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158045 s, 6.6 MB/s 00:08:39.399 20:59:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.399 20:59:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:39.659 256+0 records in 00:08:39.659 256+0 records out 00:08:39.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174863 s, 6.0 MB/s 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:39.659 256+0 records in 00:08:39.659 256+0 records out 00:08:39.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170613 s, 6.1 MB/s 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:39.659 20:59:53 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.659 20:59:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:39.917 20:59:53 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:39.917 20:59:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:39.917 20:59:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.917 20:59:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:39.917 20:59:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:39.917 20:59:53 -- bdev/nbd_common.sh@51 -- # local i 00:08:39.917 20:59:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.917 20:59:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:40.176 20:59:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:40.176 20:59:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:40.176 20:59:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:40.176 20:59:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.176 20:59:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.176 20:59:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:40.176 20:59:53 -- bdev/nbd_common.sh@41 -- # break 00:08:40.176 20:59:53 -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.176 20:59:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.176 20:59:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:40.176 20:59:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:40.176 20:59:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.435 20:59:54 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@41 -- # break 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@41 -- # break 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.435 20:59:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:40.694 20:59:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:40.694 20:59:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:40.694 20:59:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:40.694 20:59:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.694 20:59:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.694 20:59:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:40.694 20:59:54 -- bdev/nbd_common.sh@41 -- # break 00:08:40.694 20:59:54 -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.694 20:59:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.694 20:59:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:40.954 20:59:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:40.954 20:59:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:40.954 20:59:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:40.954 20:59:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.954 20:59:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.954 20:59:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:40.954 20:59:54 -- bdev/nbd_common.sh@41 -- # break 00:08:40.954 20:59:54 -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.954 20:59:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.954 20:59:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@41 -- # break 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.213 20:59:55 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:41.472 20:59:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:41.472 20:59:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@65 -- # true 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@65 -- # count=0 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@104 -- # count=0 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@109 -- # return 0 00:08:41.473 20:59:55 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:41.473 20:59:55 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:41.732 malloc_lvol_verify 00:08:41.990 20:59:55 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:42.249 2a3ff358-ee1e-4ca7-b116-54dc59904ea1 00:08:42.250 20:59:55 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:42.250 0cab173d-9650-4364-9c4a-7ba1a125676b 00:08:42.250 20:59:56 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:42.508 /dev/nbd0 00:08:42.508 20:59:56 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:42.508 mke2fs 1.46.5 (30-Dec-2021) 00:08:42.508 Discarding device blocks: 0/4096 done 00:08:42.508 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:42.508 00:08:42.508 Allocating group tables: 0/1 done 00:08:42.508 Writing inode tables: 0/1 done 00:08:42.508 Creating journal (1024 blocks): done 00:08:42.508 Writing superblocks and filesystem accounting information: 0/1 done 00:08:42.508 00:08:42.508 20:59:56 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:42.508 20:59:56 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:42.508 20:59:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.508 20:59:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:42.508 20:59:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:42.508 20:59:56 -- bdev/nbd_common.sh@51 -- # local i 00:08:42.508 20:59:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.508 20:59:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:42.767 20:59:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:42.767 20:59:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:42.767 20:59:56 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:08:42.767 20:59:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.767 20:59:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.767 20:59:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:42.767 20:59:56 -- bdev/nbd_common.sh@41 -- # break 00:08:42.767 20:59:56 -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.767 20:59:56 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:42.767 20:59:56 -- bdev/nbd_common.sh@147 -- # return 0 00:08:42.767 20:59:56 -- bdev/blockdev.sh@324 -- # killprocess 60958 00:08:42.767 20:59:56 -- common/autotest_common.sh@926 -- # '[' -z 60958 ']' 00:08:42.767 20:59:56 -- common/autotest_common.sh@930 -- # kill -0 60958 00:08:42.767 20:59:56 -- common/autotest_common.sh@931 -- # uname 00:08:42.767 20:59:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:42.768 20:59:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60958 00:08:43.027 killing process with pid 60958 00:08:43.027 20:59:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:43.027 20:59:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:43.027 20:59:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60958' 00:08:43.027 20:59:56 -- common/autotest_common.sh@945 -- # kill 60958 00:08:43.027 20:59:56 -- common/autotest_common.sh@950 -- # wait 60958 00:08:43.964 ************************************ 00:08:43.964 END TEST bdev_nbd 00:08:43.964 ************************************ 00:08:43.964 20:59:57 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:43.964 00:08:43.964 real 0m12.521s 00:08:43.964 user 0m17.597s 00:08:43.964 sys 0m3.768s 00:08:43.964 20:59:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.964 20:59:57 -- common/autotest_common.sh@10 -- # set +x 00:08:43.964 20:59:57 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:43.964 20:59:57 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:08:43.964 skipping fio tests on NVMe due to multi-ns failures. 00:08:43.964 20:59:57 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:43.964 20:59:57 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:43.964 20:59:57 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:43.964 20:59:57 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:08:43.964 20:59:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.964 20:59:57 -- common/autotest_common.sh@10 -- # set +x 00:08:43.964 ************************************ 00:08:43.964 START TEST bdev_verify 00:08:43.964 ************************************ 00:08:43.964 20:59:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:43.964 [2024-07-13 20:59:57.785804] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:43.964 [2024-07-13 20:59:57.785981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61357 ] 00:08:44.222 [2024-07-13 20:59:57.944818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:44.222 [2024-07-13 20:59:58.119992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.222 [2024-07-13 20:59:58.120007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.158 Running I/O for 5 seconds... 00:08:50.443 00:08:50.443 Latency(us) 00:08:50.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.443 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0x0 length 0xbd0bd 00:08:50.443 Nvme0n1 : 5.04 2879.33 11.25 0.00 0.00 44328.16 6374.87 49807.36 00:08:50.443 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:50.443 Nvme0n1 : 5.04 2877.70 11.24 0.00 0.00 44344.41 8638.84 56241.80 00:08:50.443 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0x0 length 0xa0000 00:08:50.443 Nvme1n1 : 5.04 2878.24 11.24 0.00 0.00 44304.91 7030.23 47662.55 00:08:50.443 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0xa0000 length 0xa0000 00:08:50.443 Nvme1n1 : 5.04 2876.75 11.24 0.00 0.00 44315.81 8936.73 53858.68 00:08:50.443 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0x0 length 0x80000 00:08:50.443 Nvme2n1 : 5.05 2883.25 11.26 0.00 0.00 44220.76 3649.16 44802.79 00:08:50.443 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0x80000 length 0x80000 00:08:50.443 Nvme2n1 : 5.05 2882.03 11.26 0.00 0.00 44154.68 3127.85 42419.67 00:08:50.443 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0x0 length 0x80000 00:08:50.443 Nvme2n2 : 5.05 2882.36 11.26 0.00 0.00 44150.24 4379.00 42419.67 00:08:50.443 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0x80000 length 0x80000 00:08:50.443 Nvme2n2 : 5.06 2886.00 11.27 0.00 0.00 44047.83 3515.11 43134.60 00:08:50.443 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0x0 length 0x80000 00:08:50.443 Nvme2n3 : 5.05 2881.15 11.25 0.00 0.00 44128.08 5749.29 42896.29 00:08:50.443 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0x80000 length 0x80000 00:08:50.443 Nvme2n3 : 5.06 2885.29 11.27 0.00 0.00 44009.43 3798.11 43372.92 00:08:50.443 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0x0 length 0x20000 00:08:50.443 Nvme3n1 : 5.05 2880.05 11.25 0.00 0.00 44103.87 6464.23 42896.29 00:08:50.443 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:50.443 Verification LBA range: start 0x20000 length 0x20000 00:08:50.443 Nvme3n1 : 5.06 2884.60 11.27 0.00 0.00 43980.75 4110.89 43372.92 00:08:50.443 
=================================================================================================================== 00:08:50.443 Total : 34576.77 135.07 0.00 0.00 44173.83 3127.85 56241.80 00:08:57.030 00:08:57.030 real 0m12.158s 00:08:57.030 user 0m23.074s 00:08:57.030 sys 0m0.269s 00:08:57.030 21:00:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.030 21:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:57.030 ************************************ 00:08:57.030 END TEST bdev_verify 00:08:57.030 ************************************ 00:08:57.030 21:00:09 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:57.030 21:00:09 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:08:57.030 21:00:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.030 21:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:57.030 ************************************ 00:08:57.030 START TEST bdev_verify_big_io 00:08:57.030 ************************************ 00:08:57.030 21:00:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:57.030 [2024-07-13 21:00:10.019757] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:57.030 [2024-07-13 21:00:10.019932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61489 ] 00:08:57.030 [2024-07-13 21:00:10.189500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.030 [2024-07-13 21:00:10.358821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.030 [2024-07-13 21:00:10.358858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.287 Running I/O for 5 seconds... 
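The two verify passes differ only in transfer size: the bdev_verify run above used -q 128 -o 4096 (4 KiB I/Os), while this bdev_verify_big_io run uses -o 65536 (64 KiB), both with -w verify over 5 seconds (-t 5) on a two-core mask (-m 0x3, reactors on cores 0 and 1), plus -C as passed on the traced command lines. A hedged reconstruction of the invocation, with every flag copied from the log:

  # Assumed standalone reproduction of the traced bdevperf run.
  # -q 128: queue depth; -o: I/O size in bytes; -w verify: write, read back, compare;
  # -t 5: run time in seconds; -m 0x3: core mask; -C as in the original command line.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3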
00:09:03.859 00:09:03.859 Latency(us) 00:09:03.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.859 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0x0 length 0xbd0b 00:09:03.859 Nvme0n1 : 5.39 232.41 14.53 0.00 0.00 534531.07 100567.97 716844.68 00:09:03.859 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:03.859 Nvme0n1 : 5.39 232.50 14.53 0.00 0.00 534327.19 100567.97 697779.67 00:09:03.859 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0x0 length 0xa000 00:09:03.859 Nvme1n1 : 5.39 232.31 14.52 0.00 0.00 526189.56 101044.60 671088.64 00:09:03.859 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0xa000 length 0xa000 00:09:03.859 Nvme1n1 : 5.39 232.38 14.52 0.00 0.00 527141.67 101997.85 640584.61 00:09:03.859 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0x0 length 0x8000 00:09:03.859 Nvme2n1 : 5.43 238.83 14.93 0.00 0.00 508620.55 33602.09 598641.57 00:09:03.859 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0x8000 length 0x8000 00:09:03.859 Nvme2n1 : 5.42 239.14 14.95 0.00 0.00 510103.25 28240.06 583389.56 00:09:03.859 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0x0 length 0x8000 00:09:03.859 Nvme2n2 : 5.44 248.06 15.50 0.00 0.00 486589.35 4974.78 560511.53 00:09:03.859 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0x8000 length 0x8000 00:09:03.859 Nvme2n2 : 5.43 248.37 15.52 0.00 0.00 488440.83 5064.15 579576.55 00:09:03.859 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0x0 length 0x8000 00:09:03.859 Nvme2n3 : 5.45 255.37 15.96 0.00 0.00 466225.31 5868.45 564324.54 00:09:03.859 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0x8000 length 0x8000 00:09:03.859 Nvme2n3 : 5.43 248.26 15.52 0.00 0.00 481276.98 5868.45 583389.56 00:09:03.859 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0x0 length 0x2000 00:09:03.859 Nvme3n1 : 5.45 255.28 15.96 0.00 0.00 459124.16 6374.87 455653.93 00:09:03.859 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.859 Verification LBA range: start 0x2000 length 0x2000 00:09:03.859 Nvme3n1 : 5.44 255.68 15.98 0.00 0.00 461051.47 3619.37 587202.56 00:09:03.859 =================================================================================================================== 00:09:03.859 Total : 2918.59 182.41 0.00 0.00 497499.70 3619.37 716844.68 00:09:04.425 00:09:04.425 real 0m8.150s 00:09:04.425 user 0m15.012s 00:09:04.425 sys 0m0.282s 00:09:04.425 21:00:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.425 ************************************ 00:09:04.425 END TEST bdev_verify_big_io 00:09:04.425 ************************************ 00:09:04.425 21:00:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.425 21:00:18 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:04.425 21:00:18 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:04.425 21:00:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.425 21:00:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.425 ************************************ 00:09:04.425 START TEST bdev_write_zeroes 00:09:04.425 ************************************ 00:09:04.425 21:00:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:04.425 [2024-07-13 21:00:18.224069] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:04.425 [2024-07-13 21:00:18.224286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61598 ] 00:09:04.684 [2024-07-13 21:00:18.395327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.684 [2024-07-13 21:00:18.574039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.618 Running I/O for 1 seconds... 00:09:06.552 00:09:06.552 Latency(us) 00:09:06.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.552 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:06.552 Nvme0n1 : 1.01 7514.31 29.35 0.00 0.00 16980.81 10068.71 30027.40 00:09:06.552 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:06.552 Nvme1n1 : 1.01 7504.39 29.31 0.00 0.00 16977.10 10783.65 31695.59 00:09:06.552 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:06.552 Nvme2n1 : 1.02 7495.82 29.28 0.00 0.00 16931.40 10247.45 30384.87 00:09:06.552 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:06.552 Nvme2n2 : 1.02 7537.10 29.44 0.00 0.00 16798.54 7923.90 27525.12 00:09:06.552 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:06.552 Nvme2n3 : 1.02 7527.12 29.40 0.00 0.00 16774.94 8102.63 28001.75 00:09:06.552 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:06.552 Nvme3n1 : 1.02 7567.21 29.56 0.00 0.00 16656.24 4944.99 28001.75 00:09:06.552 =================================================================================================================== 00:09:06.552 Total : 45145.96 176.35 0.00 0.00 16852.44 4944.99 31695.59 00:09:07.500 00:09:07.500 real 0m3.163s 00:09:07.500 user 0m2.816s 00:09:07.500 sys 0m0.227s 00:09:07.500 21:00:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.500 21:00:21 -- common/autotest_common.sh@10 -- # set +x 00:09:07.500 ************************************ 00:09:07.500 END TEST bdev_write_zeroes 00:09:07.500 ************************************ 00:09:07.500 21:00:21 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.500 21:00:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:07.500 21:00:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.500 21:00:21 -- common/autotest_common.sh@10 
-- # set +x 00:09:07.500 ************************************ 00:09:07.500 START TEST bdev_json_nonenclosed 00:09:07.500 ************************************ 00:09:07.500 21:00:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.759 [2024-07-13 21:00:21.439758] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:07.759 [2024-07-13 21:00:21.439957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61651 ] 00:09:07.759 [2024-07-13 21:00:21.606734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.018 [2024-07-13 21:00:21.784721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.019 [2024-07-13 21:00:21.785025] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:08.019 [2024-07-13 21:00:21.785055] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:08.277 00:09:08.277 real 0m0.824s 00:09:08.277 user 0m0.593s 00:09:08.277 sys 0m0.124s 00:09:08.277 21:00:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.277 21:00:22 -- common/autotest_common.sh@10 -- # set +x 00:09:08.277 ************************************ 00:09:08.277 END TEST bdev_json_nonenclosed 00:09:08.278 ************************************ 00:09:08.544 21:00:22 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:08.544 21:00:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:08.544 21:00:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.544 21:00:22 -- common/autotest_common.sh@10 -- # set +x 00:09:08.544 ************************************ 00:09:08.544 START TEST bdev_json_nonarray 00:09:08.544 ************************************ 00:09:08.544 21:00:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:08.544 [2024-07-13 21:00:22.317754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:08.544 [2024-07-13 21:00:22.317969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61682 ] 00:09:08.827 [2024-07-13 21:00:22.489044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.827 [2024-07-13 21:00:22.665637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.827 [2024-07-13 21:00:22.665895] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
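Both JSON negative tests exercise the config loader's validation: nonenclosed.json carries content that is not wrapped in a top-level {} object, and nonarray.json (still running below) supplies a "subsystems" key whose value is not an array; each is expected to fail with the *ERROR* lines shown and a non-zero spdk_app_stop. For contrast, a minimal config of the shape the loader accepts, assembled from the bdev_nvme_attach_controller calls that appear later in this log (the file name minimal.json is hypothetical; the traddr is one of the PCIe addresses used there):

  # Sketch of a minimal well-formed bdevperf/spdk_tgt JSON config.
  cat > minimal.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" }
          }
        ]
      }
    ]
  }
  EOF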
00:09:08.827 [2024-07-13 21:00:22.665926] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:09.404 00:09:09.404 real 0m0.817s 00:09:09.404 user 0m0.585s 00:09:09.404 sys 0m0.126s 00:09:09.404 21:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.404 21:00:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.404 ************************************ 00:09:09.404 END TEST bdev_json_nonarray 00:09:09.404 ************************************ 00:09:09.404 21:00:23 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:09:09.404 21:00:23 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:09:09.404 21:00:23 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:09:09.404 21:00:23 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:09.404 21:00:23 -- bdev/blockdev.sh@809 -- # cleanup 00:09:09.404 21:00:23 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:09.404 21:00:23 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:09.404 21:00:23 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:09:09.404 21:00:23 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:09:09.404 21:00:23 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:09:09.404 21:00:23 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:09:09.404 00:09:09.404 real 0m48.309s 00:09:09.404 user 1m15.688s 00:09:09.404 sys 0m6.255s 00:09:09.404 21:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.404 21:00:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.404 ************************************ 00:09:09.404 END TEST blockdev_nvme 00:09:09.404 ************************************ 00:09:09.404 21:00:23 -- spdk/autotest.sh@219 -- # uname -s 00:09:09.404 21:00:23 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:09:09.404 21:00:23 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:09.404 21:00:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:09.404 21:00:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.404 21:00:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.404 ************************************ 00:09:09.404 START TEST blockdev_nvme_gpt 00:09:09.404 ************************************ 00:09:09.404 21:00:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:09.404 * Looking for test storage... 
00:09:09.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:09.404 21:00:23 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:09.404 21:00:23 -- bdev/nbd_common.sh@6 -- # set -e 00:09:09.404 21:00:23 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:09.404 21:00:23 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:09.404 21:00:23 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:09.404 21:00:23 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:09.404 21:00:23 -- bdev/blockdev.sh@18 -- # : 00:09:09.404 21:00:23 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:09.404 21:00:23 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:09.404 21:00:23 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:09.404 21:00:23 -- bdev/blockdev.sh@672 -- # uname -s 00:09:09.404 21:00:23 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:09:09.404 21:00:23 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:09:09.404 21:00:23 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:09:09.404 21:00:23 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:09.404 21:00:23 -- bdev/blockdev.sh@682 -- # dek= 00:09:09.404 21:00:23 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:09.404 21:00:23 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:09.404 21:00:23 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:09.404 21:00:23 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:09:09.404 21:00:23 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:09:09.404 21:00:23 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:09.404 21:00:23 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61757 00:09:09.404 21:00:23 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:09.404 21:00:23 -- bdev/blockdev.sh@47 -- # waitforlisten 61757 00:09:09.404 21:00:23 -- common/autotest_common.sh@819 -- # '[' -z 61757 ']' 00:09:09.404 21:00:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.404 21:00:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:09.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.404 21:00:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.404 21:00:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:09.404 21:00:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.404 21:00:23 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:09.675 [2024-07-13 21:00:23.343131] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:09.675 [2024-07-13 21:00:23.343795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61757 ] 00:09:09.675 [2024-07-13 21:00:23.511310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.933 [2024-07-13 21:00:23.684264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:09.933 [2024-07-13 21:00:23.684507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.311 21:00:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:11.311 21:00:24 -- common/autotest_common.sh@852 -- # return 0 00:09:11.311 21:00:24 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:09:11.311 21:00:24 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:09:11.311 21:00:24 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:11.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:11.827 Waiting for block devices as requested 00:09:11.827 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:11.827 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.086 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.086 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:17.381 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:17.381 21:00:30 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:09:17.381 21:00:30 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:09:17.381 21:00:30 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:09:17.381 21:00:30 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:09:17.381 21:00:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:17.381 21:00:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:09:17.381 21:00:30 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:09:17.381 21:00:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:17.381 21:00:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:09:17.381 21:00:30 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:09:17.381 21:00:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:17.381 21:00:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:09:17.381 21:00:30 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:09:17.381 21:00:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:17.381 21:00:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:09:17.381 21:00:30 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:09:17.381 21:00:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:17.381 21:00:30 -- 
common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:17.381 21:00:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:09:17.381 21:00:30 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:09:17.381 21:00:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:17.381 21:00:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:09:17.381 21:00:30 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:09:17.381 21:00:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:17.381 21:00:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:09:17.381 21:00:30 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:09:17.381 21:00:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:17.381 21:00:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:17.381 21:00:30 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:09:17.381 21:00:30 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:09:17.381 21:00:30 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:09:17.381 21:00:30 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:17.381 21:00:30 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:09:17.381 21:00:30 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:09:17.381 21:00:30 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:09:17.381 21:00:30 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:09:17.381 BYT; 00:09:17.381 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:17.381 21:00:30 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:09:17.381 BYT; 00:09:17.381 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:17.381 21:00:30 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:09:17.381 21:00:30 -- bdev/blockdev.sh@114 -- # break 00:09:17.381 21:00:30 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:09:17.381 21:00:30 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:17.381 21:00:30 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:17.381 21:00:30 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:17.381 21:00:30 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:09:17.381 21:00:30 -- scripts/common.sh@410 -- # local spdk_guid 00:09:17.381 21:00:30 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:17.381 21:00:30 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:17.381 21:00:30 -- scripts/common.sh@415 -- # IFS='()' 00:09:17.381 21:00:30 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:09:17.381 21:00:30 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:17.381 21:00:30 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:17.381 21:00:30 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:17.381 21:00:30 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:17.381 21:00:30 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:17.381 21:00:30 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:09:17.381 21:00:30 -- scripts/common.sh@422 -- # local spdk_guid 00:09:17.381 21:00:30 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:17.381 21:00:30 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:17.381 21:00:30 -- scripts/common.sh@427 -- # IFS='()' 00:09:17.381 21:00:30 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:09:17.381 21:00:30 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:17.381 21:00:30 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:17.381 21:00:30 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:17.381 21:00:30 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:17.381 21:00:30 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:17.381 21:00:30 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:09:18.318 The operation has completed successfully. 00:09:18.318 21:00:32 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:09:19.254 The operation has completed successfully. 
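The two sgdisk calls above retag the partitions that parted created: partition 1 gets the current SPDK GPT partition-type GUID (6527994e-2c5a-4eec-9613-8f5944074e8b) and partition 2 the legacy GUID (7c5222bd-8f5d-4087-9c00-bf9843c7b58c), both parsed out of module/bdev/gpt/gpt.h earlier in the trace, while -u pins each unique partition GUID so the resulting partition UUIDs are deterministic. Collected into one sketch, with the device and all GUIDs exactly as in the log:

  # The partitioning steps performed above, gathered in one place.
  dev=/dev/nvme2n1
  parted -s "$dev" mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  # sgdisk -t sets a partition's type GUID, -u its unique partition GUID
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
         -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
         -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"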
00:09:19.254 21:00:33 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:20.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.458 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.458 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.458 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.458 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.458 21:00:34 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:09:20.458 21:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.458 21:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.458 [] 00:09:20.458 21:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.458 21:00:34 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:09:20.458 21:00:34 -- bdev/blockdev.sh@79 -- # local json 00:09:20.458 21:00:34 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:20.458 21:00:34 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:20.717 21:00:34 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:20.717 21:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.717 21:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.976 21:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.977 21:00:34 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:20.977 21:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.977 21:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.977 21:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.977 21:00:34 -- bdev/blockdev.sh@738 -- # cat 00:09:20.977 21:00:34 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:20.977 21:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.977 21:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.977 21:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.977 21:00:34 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:20.977 21:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.977 21:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.977 21:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.977 21:00:34 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:20.977 21:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.977 21:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.977 21:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.977 21:00:34 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:20.977 21:00:34 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:20.977 21:00:34 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:20.977 21:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.977 21:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.977 21:00:34 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.977 21:00:34 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:20.977 21:00:34 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:20.977 21:00:34 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d4b06228-9960-4039-9211-f80362469a91"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d4b06228-9960-4039-9211-f80362469a91",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "abcf59c3-5c8a-4ebe-a0aa-db1f3a90a5f8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "abcf59c3-5c8a-4ebe-a0aa-db1f3a90a5f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6e25e16c-0485-4bfd-b46e-adf335cd25a0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6e25e16c-0485-4bfd-b46e-adf335cd25a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d14955ba-2f5a-4679-a985-a3df30f10657"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d14955ba-2f5a-4679-a985-a3df30f10657",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c20ad60a-94be-4c67-8893-537e943f20cc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c20ad60a-94be-4c67-8893-537e943f20cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:20.977 21:00:34 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:20.977 21:00:34 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:09:20.977 21:00:34 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:20.977 21:00:34 -- bdev/blockdev.sh@752 -- # killprocess 61757 00:09:20.977 21:00:34 -- common/autotest_common.sh@926 -- # '[' -z 61757 ']' 00:09:20.977 21:00:34 -- common/autotest_common.sh@930 -- # kill -0 61757 00:09:20.977 21:00:34 -- common/autotest_common.sh@931 -- # uname 00:09:20.977 21:00:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:20.977 21:00:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61757 00:09:21.236 killing process with pid 61757 00:09:21.236 21:00:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:21.236 21:00:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:21.236 21:00:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61757' 00:09:21.236 21:00:34 -- common/autotest_common.sh@945 -- # kill 61757 00:09:21.236 21:00:34 -- common/autotest_common.sh@950 -- # wait 61757 00:09:23.141 21:00:36 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:23.141 21:00:36 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:23.141 21:00:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:23.141 21:00:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:23.141 21:00:36 -- common/autotest_common.sh@10 -- # set +x 00:09:23.141 ************************************ 00:09:23.141 START TEST bdev_hello_world 00:09:23.141 ************************************ 00:09:23.141 21:00:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:23.141 [2024-07-13 21:00:36.777787] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:23.141 [2024-07-13 21:00:36.778254] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62419 ] 00:09:23.141 [2024-07-13 21:00:36.945916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.400 [2024-07-13 21:00:37.112725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.968 [2024-07-13 21:00:37.666006] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:23.968 [2024-07-13 21:00:37.666061] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:23.968 [2024-07-13 21:00:37.666102] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:23.968 [2024-07-13 21:00:37.668808] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:23.968 [2024-07-13 21:00:37.669380] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:23.968 [2024-07-13 21:00:37.669422] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:23.968 [2024-07-13 21:00:37.669595] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:23.968 00:09:23.968 [2024-07-13 21:00:37.669626] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:24.906 00:09:24.906 real 0m1.931s 00:09:24.906 user 0m1.626s 00:09:24.906 sys 0m0.197s 00:09:24.906 ************************************ 00:09:24.906 END TEST bdev_hello_world 00:09:24.906 ************************************ 00:09:24.906 21:00:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.906 21:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.906 21:00:38 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:24.906 21:00:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:24.906 21:00:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.906 21:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.906 ************************************ 00:09:24.906 START TEST bdev_bounds 00:09:24.906 ************************************ 00:09:24.906 21:00:38 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:09:24.906 Process bdevio pid: 62461 00:09:24.906 21:00:38 -- bdev/blockdev.sh@288 -- # bdevio_pid=62461 00:09:24.906 21:00:38 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:24.906 21:00:38 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:24.906 21:00:38 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 62461' 00:09:24.906 21:00:38 -- bdev/blockdev.sh@291 -- # waitforlisten 62461 00:09:24.906 21:00:38 -- common/autotest_common.sh@819 -- # '[' -z 62461 ']' 00:09:24.906 21:00:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.906 21:00:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:24.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.906 21:00:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
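
bdevio is launched here as an RPC server on /var/tmp/spdk.sock, and the harness blocks on waitforlisten until that socket answers before any tests are driven. A minimal sketch of the pattern visible in the trace, assuming rpc.py's rpc_get_methods call as the readiness probe (the retry budget and sleep interval are illustrative, not the harness's exact values):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do
            # give up early if the target process died during startup
            kill -0 "$pid" 2> /dev/null || return 1
            # the socket is ready once any RPC gets a response
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
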
00:09:24.906 21:00:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:24.906 21:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.906 [2024-07-13 21:00:38.763829] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:24.906 [2024-07-13 21:00:38.764024] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62461 ] 00:09:25.165 [2024-07-13 21:00:38.932653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:25.424 [2024-07-13 21:00:39.091405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.424 [2024-07-13 21:00:39.091495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.424 [2024-07-13 21:00:39.091511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.802 21:00:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:26.802 21:00:40 -- common/autotest_common.sh@852 -- # return 0 00:09:26.802 21:00:40 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:26.802 I/O targets: 00:09:26.802 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:26.802 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:26.802 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:26.802 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:26.802 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:26.802 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:26.802 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:26.802 00:09:26.802 00:09:26.802 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.802 http://cunit.sourceforge.net/ 00:09:26.802 00:09:26.802 00:09:26.802 Suite: bdevio tests on: Nvme3n1 00:09:26.802 Test: blockdev write read block ...passed 00:09:26.802 Test: blockdev write zeroes read block ...passed 00:09:26.802 Test: blockdev write zeroes read no split ...passed 00:09:26.802 Test: blockdev write zeroes read split ...passed 00:09:26.802 Test: blockdev write zeroes read split partial ...passed 00:09:26.802 Test: blockdev reset ...[2024-07-13 21:00:40.595864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:26.802 [2024-07-13 21:00:40.599778] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:26.802 passed 00:09:26.802 Test: blockdev write read 8 blocks ...passed 00:09:26.802 Test: blockdev write read size > 128k ...passed 00:09:26.802 Test: blockdev write read invalid size ...passed 00:09:26.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:26.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:26.802 Test: blockdev write read max offset ...passed 00:09:26.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:26.802 Test: blockdev writev readv 8 blocks ...passed 00:09:26.802 Test: blockdev writev readv 30 x 1block ...passed 00:09:26.802 Test: blockdev writev readv block ...passed 00:09:26.802 Test: blockdev writev readv size > 128k ...passed 00:09:26.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:26.802 Test: blockdev comparev and writev ...[2024-07-13 21:00:40.609112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27720a000 len:0x1000 00:09:26.802 [2024-07-13 21:00:40.609192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:26.802 passed 00:09:26.802 Test: blockdev nvme passthru rw ...passed 00:09:26.802 Test: blockdev nvme passthru vendor specific ...passed 00:09:26.802 Test: blockdev nvme admin passthru ...[2024-07-13 21:00:40.610226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:26.802 [2024-07-13 21:00:40.610287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:26.802 passed 00:09:26.802 Test: blockdev copy ...passed 00:09:26.802 Suite: bdevio tests on: Nvme2n3 00:09:26.802 Test: blockdev write read block ...passed 00:09:26.802 Test: blockdev write zeroes read block ...passed 00:09:26.802 Test: blockdev write zeroes read no split ...passed 00:09:26.802 Test: blockdev write zeroes read split ...passed 00:09:26.802 Test: blockdev write zeroes read split partial ...passed 00:09:26.802 Test: blockdev reset ...[2024-07-13 21:00:40.674498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:26.802 [2024-07-13 21:00:40.678632] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:26.802 passed 00:09:26.802 Test: blockdev write read 8 blocks ...passed 00:09:26.802 Test: blockdev write read size > 128k ...passed 00:09:26.802 Test: blockdev write read invalid size ...passed 00:09:26.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:26.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:26.802 Test: blockdev write read max offset ...passed 00:09:26.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:26.802 Test: blockdev writev readv 8 blocks ...passed 00:09:26.802 Test: blockdev writev readv 30 x 1block ...passed 00:09:26.802 Test: blockdev writev readv block ...passed 00:09:26.802 Test: blockdev writev readv size > 128k ...passed 00:09:26.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:26.802 Test: blockdev comparev and writev ...[2024-07-13 21:00:40.687951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x255f04000 len:0x1000 00:09:26.802 [2024-07-13 21:00:40.688008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:26.802 passed 00:09:26.802 Test: blockdev nvme passthru rw ...passed 00:09:26.802 Test: blockdev nvme passthru vendor specific ...passed 00:09:26.802 Test: blockdev nvme admin passthru ...[2024-07-13 21:00:40.688968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:26.802 [2024-07-13 21:00:40.689014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:26.802 passed 00:09:26.802 Test: blockdev copy ...passed 00:09:26.802 Suite: bdevio tests on: Nvme2n2 00:09:26.802 Test: blockdev write read block ...passed 00:09:26.802 Test: blockdev write zeroes read block ...passed 00:09:26.802 Test: blockdev write zeroes read no split ...passed 00:09:27.062 Test: blockdev write zeroes read split ...passed 00:09:27.062 Test: blockdev write zeroes read split partial ...passed 00:09:27.062 Test: blockdev reset ...[2024-07-13 21:00:40.751222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:27.062 [2024-07-13 21:00:40.755009] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:27.062 passed 00:09:27.062 Test: blockdev write read 8 blocks ...passed 00:09:27.062 Test: blockdev write read size > 128k ...passed 00:09:27.062 Test: blockdev write read invalid size ...passed 00:09:27.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.062 Test: blockdev write read max offset ...passed 00:09:27.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.062 Test: blockdev writev readv 8 blocks ...passed 00:09:27.062 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.062 Test: blockdev writev readv block ...passed 00:09:27.062 Test: blockdev writev readv size > 128k ...passed 00:09:27.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.062 Test: blockdev comparev and writev ...[2024-07-13 21:00:40.764063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x255f04000 len:0x1000 00:09:27.062 [2024-07-13 21:00:40.764121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:27.062 passed 00:09:27.062 Test: blockdev nvme passthru rw ...passed 00:09:27.062 Test: blockdev nvme passthru vendor specific ...passed 00:09:27.062 Test: blockdev nvme admin passthru ...[2024-07-13 21:00:40.765042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:27.062 [2024-07-13 21:00:40.765086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:27.062 passed 00:09:27.062 Test: blockdev copy ...passed 00:09:27.062 Suite: bdevio tests on: Nvme2n1 00:09:27.062 Test: blockdev write read block ...passed 00:09:27.062 Test: blockdev write zeroes read block ...passed 00:09:27.062 Test: blockdev write zeroes read no split ...passed 00:09:27.062 Test: blockdev write zeroes read split ...passed 00:09:27.062 Test: blockdev write zeroes read split partial ...passed 00:09:27.062 Test: blockdev reset ...[2024-07-13 21:00:40.825787] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:27.062 [2024-07-13 21:00:40.829644] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:27.062 passed 00:09:27.062 Test: blockdev write read 8 blocks ...passed 00:09:27.062 Test: blockdev write read size > 128k ...passed 00:09:27.062 Test: blockdev write read invalid size ...passed 00:09:27.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.062 Test: blockdev write read max offset ...passed 00:09:27.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.062 Test: blockdev writev readv 8 blocks ...passed 00:09:27.062 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.062 Test: blockdev writev readv block ...passed 00:09:27.062 Test: blockdev writev readv size > 128k ...passed 00:09:27.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.062 Test: blockdev comparev and writev ...[2024-07-13 21:00:40.839135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29383c000 len:0x1000 00:09:27.062 [2024-07-13 21:00:40.839202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:27.062 passed 00:09:27.062 Test: blockdev nvme passthru rw ...passed 00:09:27.062 Test: blockdev nvme passthru vendor specific ...[2024-07-13 21:00:40.840162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:27.062 [2024-07-13 21:00:40.840238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:27.062 passed 00:09:27.062 Test: blockdev nvme admin passthru ...passed 00:09:27.062 Test: blockdev copy ...passed 00:09:27.062 Suite: bdevio tests on: Nvme1n1 00:09:27.062 Test: blockdev write read block ...passed 00:09:27.062 Test: blockdev write zeroes read block ...passed 00:09:27.062 Test: blockdev write zeroes read no split ...passed 00:09:27.062 Test: blockdev write zeroes read split ...passed 00:09:27.062 Test: blockdev write zeroes read split partial ...passed 00:09:27.062 Test: blockdev reset ...[2024-07-13 21:00:40.901240] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:27.062 [2024-07-13 21:00:40.904761] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:27.062 passed 00:09:27.062 Test: blockdev write read 8 blocks ...passed 00:09:27.062 Test: blockdev write read size > 128k ...passed 00:09:27.062 Test: blockdev write read invalid size ...passed 00:09:27.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.062 Test: blockdev write read max offset ...passed 00:09:27.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.062 Test: blockdev writev readv 8 blocks ...passed 00:09:27.062 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.062 Test: blockdev writev readv block ...passed 00:09:27.062 Test: blockdev writev readv size > 128k ...passed 00:09:27.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.062 Test: blockdev comparev and writev ...[2024-07-13 21:00:40.914086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x293838000 len:0x1000 00:09:27.062 [2024-07-13 21:00:40.914142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:27.062 passed 00:09:27.062 Test: blockdev nvme passthru rw ...passed 00:09:27.062 Test: blockdev nvme passthru vendor specific ...passed 00:09:27.062 Test: blockdev nvme admin passthru ...[2024-07-13 21:00:40.915156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:27.062 [2024-07-13 21:00:40.915202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:27.062 passed 00:09:27.062 Test: blockdev copy ...passed 00:09:27.062 Suite: bdevio tests on: Nvme0n1p2 00:09:27.062 Test: blockdev write read block ...passed 00:09:27.062 Test: blockdev write zeroes read block ...passed 00:09:27.062 Test: blockdev write zeroes read no split ...passed 00:09:27.062 Test: blockdev write zeroes read split ...passed 00:09:27.062 Test: blockdev write zeroes read split partial ...passed 00:09:27.062 Test: blockdev reset ...[2024-07-13 21:00:40.975406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:27.062 [2024-07-13 21:00:40.978837] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:27.062 passed 00:09:27.062 Test: blockdev write read 8 blocks ...passed 00:09:27.062 Test: blockdev write read size > 128k ...passed 00:09:27.062 Test: blockdev write read invalid size ...passed 00:09:27.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.063 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.063 Test: blockdev write read max offset ...passed 00:09:27.063 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.063 Test: blockdev writev readv 8 blocks ...passed 00:09:27.344 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.344 Test: blockdev writev readv block ...passed 00:09:27.344 Test: blockdev writev readv size > 128k ...passed 00:09:27.344 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.344 Test: blockdev comparev and writev ...[2024-07-13 21:00:40.987943] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:27.344 separate metadata which is not supported yet. 
00:09:27.344 passed 00:09:27.344 Test: blockdev nvme passthru rw ...passed 00:09:27.344 Test: blockdev nvme passthru vendor specific ...passed 00:09:27.344 Test: blockdev nvme admin passthru ...passed 00:09:27.344 Test: blockdev copy ...passed 00:09:27.344 Suite: bdevio tests on: Nvme0n1p1 00:09:27.344 Test: blockdev write read block ...passed 00:09:27.344 Test: blockdev write zeroes read block ...passed 00:09:27.344 Test: blockdev write zeroes read no split ...passed 00:09:27.344 Test: blockdev write zeroes read split ...passed 00:09:27.344 Test: blockdev write zeroes read split partial ...passed 00:09:27.344 Test: blockdev reset ...[2024-07-13 21:00:41.041590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:27.344 [2024-07-13 21:00:41.045213] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:27.344 passed 00:09:27.344 Test: blockdev write read 8 blocks ...passed 00:09:27.344 Test: blockdev write read size > 128k ...passed 00:09:27.344 Test: blockdev write read invalid size ...passed 00:09:27.344 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.344 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.345 Test: blockdev write read max offset ...passed 00:09:27.345 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.345 Test: blockdev writev readv 8 blocks ...passed 00:09:27.345 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.345 Test: blockdev writev readv block ...passed 00:09:27.345 Test: blockdev writev readv size > 128k ...passed 00:09:27.345 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.345 Test: blockdev comparev and writev ...passed 00:09:27.345 Test: blockdev nvme passthru rw ...passed 00:09:27.345 Test: blockdev nvme passthru vendor specific ...passed 00:09:27.345 Test: blockdev nvme admin passthru ...passed 00:09:27.345 Test: blockdev copy ...[2024-07-13 21:00:41.053827] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:27.345 separate metadata which is not supported yet. 
00:09:27.345 passed 00:09:27.345 00:09:27.345 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.345 suites 7 7 n/a 0 0 00:09:27.345 tests 161 161 161 0 0 00:09:27.345 asserts 1006 1006 1006 0 n/a 00:09:27.345 00:09:27.345 Elapsed time = 1.393 seconds 00:09:27.345 0 00:09:27.345 21:00:41 -- bdev/blockdev.sh@293 -- # killprocess 62461 00:09:27.345 21:00:41 -- common/autotest_common.sh@926 -- # '[' -z 62461 ']' 00:09:27.345 21:00:41 -- common/autotest_common.sh@930 -- # kill -0 62461 00:09:27.345 21:00:41 -- common/autotest_common.sh@931 -- # uname 00:09:27.345 21:00:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:27.345 21:00:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62461 00:09:27.345 21:00:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:27.345 21:00:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:27.345 21:00:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62461' 00:09:27.345 killing process with pid 62461 00:09:27.345 21:00:41 -- common/autotest_common.sh@945 -- # kill 62461 00:09:27.345 21:00:41 -- common/autotest_common.sh@950 -- # wait 62461 00:09:28.282 21:00:41 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:28.282 00:09:28.282 real 0m3.323s 00:09:28.282 user 0m8.832s 00:09:28.282 sys 0m0.399s 00:09:28.282 21:00:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.282 ************************************ 00:09:28.282 END TEST bdev_bounds 00:09:28.282 ************************************ 00:09:28.282 21:00:41 -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 21:00:42 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:28.282 21:00:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:28.282 21:00:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:28.282 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 ************************************ 00:09:28.282 START TEST bdev_nbd 00:09:28.282 ************************************ 00:09:28.282 21:00:42 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:28.282 21:00:42 -- bdev/blockdev.sh@298 -- # uname -s 00:09:28.282 21:00:42 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:28.282 21:00:42 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.282 21:00:42 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:28.282 21:00:42 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:28.282 21:00:42 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:28.282 21:00:42 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:09:28.282 21:00:42 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:28.282 21:00:42 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:28.282 21:00:42 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:28.282 21:00:42 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:09:28.282 21:00:42 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:28.282 21:00:42 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:28.282 21:00:42 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:28.282 21:00:42 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:28.282 21:00:42 -- bdev/blockdev.sh@316 -- # nbd_pid=62528 00:09:28.282 21:00:42 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:28.282 21:00:42 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:28.282 21:00:42 -- bdev/blockdev.sh@318 -- # waitforlisten 62528 /var/tmp/spdk-nbd.sock 00:09:28.282 21:00:42 -- common/autotest_common.sh@819 -- # '[' -z 62528 ']' 00:09:28.282 21:00:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:28.282 21:00:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:28.282 21:00:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:28.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:28.282 21:00:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:28.282 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:09:28.282 [2024-07-13 21:00:42.125591] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:28.282 [2024-07-13 21:00:42.125728] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.541 [2024-07-13 21:00:42.287618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.541 [2024-07-13 21:00:42.458128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.927 21:00:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:29.927 21:00:43 -- common/autotest_common.sh@852 -- # return 0 00:09:29.927 21:00:43 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@24 -- # local i 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:29.927 21:00:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:30.186 21:00:44 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:30.186 21:00:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:30.186 21:00:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:30.186 21:00:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:30.186 21:00:44 -- common/autotest_common.sh@857 -- # local i 00:09:30.186 21:00:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:30.186 21:00:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:30.186 21:00:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:30.186 21:00:44 -- common/autotest_common.sh@861 -- # break 00:09:30.186 21:00:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:30.186 21:00:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:30.186 21:00:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.186 1+0 records in 00:09:30.186 1+0 records out 00:09:30.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517703 s, 7.9 MB/s 00:09:30.186 21:00:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.186 21:00:44 -- common/autotest_common.sh@874 -- # size=4096 00:09:30.186 21:00:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.186 21:00:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:30.186 21:00:44 -- common/autotest_common.sh@877 -- # return 0 00:09:30.186 21:00:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:30.186 21:00:44 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:30.186 21:00:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:30.443 21:00:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:30.443 21:00:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:30.702 21:00:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:30.702 21:00:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:30.702 21:00:44 -- common/autotest_common.sh@857 -- # local i 00:09:30.702 21:00:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:30.702 21:00:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:30.702 21:00:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:30.702 21:00:44 -- common/autotest_common.sh@861 -- # break 00:09:30.702 21:00:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:30.702 21:00:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:30.702 21:00:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.702 1+0 records in 00:09:30.702 1+0 records out 00:09:30.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528618 s, 7.7 MB/s 00:09:30.702 21:00:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.702 21:00:44 -- common/autotest_common.sh@874 -- # size=4096 00:09:30.702 21:00:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.702 21:00:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:30.702 21:00:44 -- common/autotest_common.sh@877 -- # return 0 00:09:30.702 21:00:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:30.702 21:00:44 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:30.702 21:00:44 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:30.961 21:00:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:30.961 21:00:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:30.961 21:00:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:30.961 21:00:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:09:30.961 21:00:44 -- common/autotest_common.sh@857 -- # local i 00:09:30.961 21:00:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:30.961 21:00:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:30.961 21:00:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:09:30.961 21:00:44 -- common/autotest_common.sh@861 -- # break 00:09:30.961 21:00:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:30.961 21:00:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:30.961 21:00:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.961 1+0 records in 00:09:30.961 1+0 records out 00:09:30.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127041 s, 3.2 MB/s 00:09:30.961 21:00:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.961 21:00:44 -- common/autotest_common.sh@874 -- # size=4096 00:09:30.961 21:00:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.961 21:00:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:30.961 21:00:44 -- common/autotest_common.sh@877 -- # return 0 00:09:30.961 21:00:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:30.961 21:00:44 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:30.961 21:00:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:31.221 21:00:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:31.221 21:00:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:31.221 21:00:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:31.221 21:00:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:09:31.221 21:00:44 -- common/autotest_common.sh@857 -- # local i 00:09:31.221 21:00:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:31.221 21:00:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:31.221 21:00:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:09:31.221 21:00:44 -- common/autotest_common.sh@861 -- # break 00:09:31.221 21:00:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:31.221 21:00:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:31.221 21:00:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.221 1+0 records in 00:09:31.221 1+0 records out 00:09:31.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624474 s, 6.6 MB/s 00:09:31.221 21:00:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.221 21:00:44 -- common/autotest_common.sh@874 -- # size=4096 00:09:31.221 21:00:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.221 21:00:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:31.221 21:00:44 -- common/autotest_common.sh@877 -- # return 0 00:09:31.221 21:00:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.221 21:00:44 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:31.221 21:00:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:31.480 21:00:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:31.480 21:00:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:31.480 21:00:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:31.480 21:00:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:09:31.480 21:00:45 -- common/autotest_common.sh@857 -- # local i 00:09:31.480 21:00:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:31.480 21:00:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:31.480 21:00:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:09:31.480 21:00:45 -- common/autotest_common.sh@861 -- # break 00:09:31.480 21:00:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:31.480 21:00:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:31.480 21:00:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.480 1+0 records in 00:09:31.480 1+0 records out 00:09:31.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755837 s, 5.4 MB/s 00:09:31.480 21:00:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.480 21:00:45 -- common/autotest_common.sh@874 -- # size=4096 00:09:31.480 21:00:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.480 21:00:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:31.480 21:00:45 -- common/autotest_common.sh@877 -- # return 0 00:09:31.480 21:00:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.480 21:00:45 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:31.480 21:00:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:31.739 21:00:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:31.739 21:00:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:31.739 21:00:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:31.739 21:00:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:09:31.739 21:00:45 -- common/autotest_common.sh@857 -- # local i 00:09:31.739 21:00:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:31.739 21:00:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:31.739 21:00:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:09:31.739 21:00:45 -- common/autotest_common.sh@861 -- # break 00:09:31.739 21:00:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:31.739 21:00:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:31.739 21:00:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.739 1+0 records in 00:09:31.739 1+0 records out 00:09:31.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083066 s, 4.9 MB/s 00:09:31.739 21:00:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.739 21:00:45 -- common/autotest_common.sh@874 -- # size=4096 00:09:31.739 21:00:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.739 21:00:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:31.739 21:00:45 -- common/autotest_common.sh@877 -- # return 0 
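
Each nbd_start_disk above is followed by a waitfornbd check, and the trace shows its two stages: wait for the device to appear in /proc/partitions, then prove it actually serves I/O by reading one 4 KiB block with O_DIRECT and checking what landed on disk. A condensed reconstruction (the retry counts and the scratch path are illustrative):

    waitfornbd() {
        local nbd_name=$1 i size
        # 1) wait for the kernel to register the nbd device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # 2) the node can exist before it accepts I/O, so read one
        #    4 KiB block through it and confirm the copy succeeded
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || true
            size=$(stat -c %s /tmp/nbdtest 2> /dev/null || echo 0)
            rm -f /tmp/nbdtest
            [[ $size == 4096 ]] && return 0
            sleep 0.1
        done
        return 1
    }
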
00:09:31.739 21:00:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.739 21:00:45 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:31.739 21:00:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:31.998 21:00:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:31.998 21:00:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:31.998 21:00:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:31.998 21:00:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:09:31.998 21:00:45 -- common/autotest_common.sh@857 -- # local i 00:09:31.998 21:00:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:31.998 21:00:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:31.998 21:00:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:09:31.998 21:00:45 -- common/autotest_common.sh@861 -- # break 00:09:31.998 21:00:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:31.998 21:00:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:31.998 21:00:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.998 1+0 records in 00:09:31.998 1+0 records out 00:09:31.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853442 s, 4.8 MB/s 00:09:31.998 21:00:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.998 21:00:45 -- common/autotest_common.sh@874 -- # size=4096 00:09:31.998 21:00:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.998 21:00:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:31.998 21:00:45 -- common/autotest_common.sh@877 -- # return 0 00:09:31.998 21:00:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.998 21:00:45 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:31.998 21:00:45 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:32.257 21:00:46 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:32.257 { 00:09:32.257 "nbd_device": "/dev/nbd0", 00:09:32.257 "bdev_name": "Nvme0n1p1" 00:09:32.257 }, 00:09:32.257 { 00:09:32.257 "nbd_device": "/dev/nbd1", 00:09:32.257 "bdev_name": "Nvme0n1p2" 00:09:32.257 }, 00:09:32.257 { 00:09:32.257 "nbd_device": "/dev/nbd2", 00:09:32.257 "bdev_name": "Nvme1n1" 00:09:32.257 }, 00:09:32.257 { 00:09:32.257 "nbd_device": "/dev/nbd3", 00:09:32.258 "bdev_name": "Nvme2n1" 00:09:32.258 }, 00:09:32.258 { 00:09:32.258 "nbd_device": "/dev/nbd4", 00:09:32.258 "bdev_name": "Nvme2n2" 00:09:32.258 }, 00:09:32.258 { 00:09:32.258 "nbd_device": "/dev/nbd5", 00:09:32.258 "bdev_name": "Nvme2n3" 00:09:32.258 }, 00:09:32.258 { 00:09:32.258 "nbd_device": "/dev/nbd6", 00:09:32.258 "bdev_name": "Nvme3n1" 00:09:32.258 } 00:09:32.258 ]' 00:09:32.258 21:00:46 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:32.258 21:00:46 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:32.258 21:00:46 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:32.258 { 00:09:32.258 "nbd_device": "/dev/nbd0", 00:09:32.258 "bdev_name": "Nvme0n1p1" 00:09:32.258 }, 00:09:32.258 { 00:09:32.258 "nbd_device": "/dev/nbd1", 00:09:32.258 "bdev_name": "Nvme0n1p2" 00:09:32.258 }, 00:09:32.258 { 00:09:32.258 "nbd_device": "/dev/nbd2", 00:09:32.258 "bdev_name": "Nvme1n1" 00:09:32.258 }, 00:09:32.258 { 00:09:32.258 "nbd_device": 
"/dev/nbd3", 00:09:32.258 "bdev_name": "Nvme2n1" 00:09:32.258 }, 00:09:32.258 { 00:09:32.258 "nbd_device": "/dev/nbd4", 00:09:32.258 "bdev_name": "Nvme2n2" 00:09:32.258 }, 00:09:32.258 { 00:09:32.258 "nbd_device": "/dev/nbd5", 00:09:32.258 "bdev_name": "Nvme2n3" 00:09:32.258 }, 00:09:32.258 { 00:09:32.258 "nbd_device": "/dev/nbd6", 00:09:32.258 "bdev_name": "Nvme3n1" 00:09:32.258 } 00:09:32.258 ]' 00:09:32.258 21:00:46 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:32.258 21:00:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.258 21:00:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:32.258 21:00:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.258 21:00:46 -- bdev/nbd_common.sh@51 -- # local i 00:09:32.258 21:00:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.258 21:00:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:32.517 21:00:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:32.517 21:00:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:32.517 21:00:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:32.517 21:00:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.517 21:00:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.517 21:00:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:32.517 21:00:46 -- bdev/nbd_common.sh@41 -- # break 00:09:32.517 21:00:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.517 21:00:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.517 21:00:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:32.776 21:00:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:32.776 21:00:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:32.776 21:00:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:32.776 21:00:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.776 21:00:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.776 21:00:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:32.776 21:00:46 -- bdev/nbd_common.sh@41 -- # break 00:09:32.776 21:00:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.776 21:00:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.776 21:00:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:33.035 21:00:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:33.035 21:00:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:33.035 21:00:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:33.035 21:00:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.035 21:00:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.035 21:00:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:33.035 21:00:46 -- bdev/nbd_common.sh@41 -- # break 00:09:33.035 21:00:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.035 21:00:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.035 21:00:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:33.295 21:00:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:09:33.295 21:00:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:33.295 21:00:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:33.295 21:00:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.295 21:00:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.295 21:00:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:33.295 21:00:47 -- bdev/nbd_common.sh@41 -- # break 00:09:33.295 21:00:47 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.295 21:00:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.295 21:00:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:33.554 21:00:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:33.554 21:00:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:33.554 21:00:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:33.554 21:00:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.554 21:00:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.554 21:00:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:33.554 21:00:47 -- bdev/nbd_common.sh@41 -- # break 00:09:33.554 21:00:47 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.554 21:00:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.554 21:00:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:33.813 21:00:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:33.813 21:00:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:33.813 21:00:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:33.813 21:00:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.813 21:00:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.813 21:00:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:33.813 21:00:47 -- bdev/nbd_common.sh@41 -- # break 00:09:33.813 21:00:47 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.813 21:00:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.813 21:00:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@41 -- # break 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.071 21:00:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:34.329 21:00:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:34.329 21:00:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.329 21:00:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:34.329 21:00:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:34.329 21:00:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.329 21:00:48 -- 
bdev/nbd_common.sh@65 -- # true 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@65 -- # count=0 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@122 -- # count=0 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@127 -- # return 0 00:09:34.330 21:00:48 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@12 -- # local i 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:34.330 21:00:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:34.588 /dev/nbd0 00:09:34.588 21:00:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:34.588 21:00:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:34.588 21:00:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:34.588 21:00:48 -- common/autotest_common.sh@857 -- # local i 00:09:34.588 21:00:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:34.588 21:00:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:34.588 21:00:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:34.588 21:00:48 -- common/autotest_common.sh@861 -- # break 00:09:34.588 21:00:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:34.588 21:00:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:34.588 21:00:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:34.588 1+0 records in 00:09:34.588 1+0 records out 00:09:34.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535057 s, 7.7 MB/s 00:09:34.588 21:00:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.588 21:00:48 -- common/autotest_common.sh@874 -- # size=4096 00:09:34.588 21:00:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.588 21:00:48 
-- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:34.588 21:00:48 -- common/autotest_common.sh@877 -- # return 0 00:09:34.588 21:00:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:34.588 21:00:48 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:34.588 21:00:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:34.588 /dev/nbd1 00:09:34.588 21:00:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:34.588 21:00:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:34.588 21:00:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:34.588 21:00:48 -- common/autotest_common.sh@857 -- # local i 00:09:34.588 21:00:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:34.588 21:00:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:34.588 21:00:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:34.588 21:00:48 -- common/autotest_common.sh@861 -- # break 00:09:34.589 21:00:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:34.589 21:00:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:34.589 21:00:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:34.589 1+0 records in 00:09:34.589 1+0 records out 00:09:34.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834831 s, 4.9 MB/s 00:09:34.589 21:00:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.847 21:00:48 -- common/autotest_common.sh@874 -- # size=4096 00:09:34.847 21:00:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:34.847 21:00:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:34.847 21:00:48 -- common/autotest_common.sh@877 -- # return 0 00:09:34.847 21:00:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:34.847 21:00:48 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:34.847 21:00:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:35.105 /dev/nbd10 00:09:35.105 21:00:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:35.105 21:00:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:35.105 21:00:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:09:35.105 21:00:48 -- common/autotest_common.sh@857 -- # local i 00:09:35.105 21:00:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:35.105 21:00:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:35.105 21:00:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:09:35.105 21:00:48 -- common/autotest_common.sh@861 -- # break 00:09:35.105 21:00:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:35.105 21:00:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:35.105 21:00:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.105 1+0 records in 00:09:35.105 1+0 records out 00:09:35.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630883 s, 6.5 MB/s 00:09:35.106 21:00:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.106 21:00:48 -- common/autotest_common.sh@874 -- # size=4096 00:09:35.106 21:00:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:09:35.106 21:00:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:35.106 21:00:48 -- common/autotest_common.sh@877 -- # return 0 00:09:35.106 21:00:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.106 21:00:48 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:35.106 21:00:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:35.106 /dev/nbd11 00:09:35.106 21:00:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:35.106 21:00:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:35.106 21:00:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:09:35.106 21:00:49 -- common/autotest_common.sh@857 -- # local i 00:09:35.106 21:00:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:35.106 21:00:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:35.106 21:00:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:09:35.365 21:00:49 -- common/autotest_common.sh@861 -- # break 00:09:35.365 21:00:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:35.365 21:00:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:35.365 21:00:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.365 1+0 records in 00:09:35.365 1+0 records out 00:09:35.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138967 s, 2.9 MB/s 00:09:35.365 21:00:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.365 21:00:49 -- common/autotest_common.sh@874 -- # size=4096 00:09:35.365 21:00:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.365 21:00:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:35.365 21:00:49 -- common/autotest_common.sh@877 -- # return 0 00:09:35.365 21:00:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.365 21:00:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:35.365 21:00:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:35.624 /dev/nbd12 00:09:35.624 21:00:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:35.624 21:00:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:35.624 21:00:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:09:35.624 21:00:49 -- common/autotest_common.sh@857 -- # local i 00:09:35.624 21:00:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:35.624 21:00:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:35.624 21:00:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:09:35.624 21:00:49 -- common/autotest_common.sh@861 -- # break 00:09:35.624 21:00:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:35.624 21:00:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:35.624 21:00:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.624 1+0 records in 00:09:35.624 1+0 records out 00:09:35.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000809549 s, 5.1 MB/s 00:09:35.624 21:00:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.624 21:00:49 -- common/autotest_common.sh@874 -- # size=4096 00:09:35.624 21:00:49 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.624 21:00:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:35.624 21:00:49 -- common/autotest_common.sh@877 -- # return 0 00:09:35.624 21:00:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.624 21:00:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:35.624 21:00:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:35.624 /dev/nbd13 00:09:35.883 21:00:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:35.883 21:00:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:35.883 21:00:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:09:35.883 21:00:49 -- common/autotest_common.sh@857 -- # local i 00:09:35.883 21:00:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:35.883 21:00:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:35.883 21:00:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:09:35.883 21:00:49 -- common/autotest_common.sh@861 -- # break 00:09:35.883 21:00:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:35.883 21:00:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:35.883 21:00:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.883 1+0 records in 00:09:35.883 1+0 records out 00:09:35.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000993326 s, 4.1 MB/s 00:09:35.883 21:00:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.883 21:00:49 -- common/autotest_common.sh@874 -- # size=4096 00:09:35.883 21:00:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.883 21:00:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:35.883 21:00:49 -- common/autotest_common.sh@877 -- # return 0 00:09:35.883 21:00:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.883 21:00:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:35.883 21:00:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:35.883 /dev/nbd14 00:09:35.883 21:00:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:35.883 21:00:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:35.883 21:00:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:09:35.883 21:00:49 -- common/autotest_common.sh@857 -- # local i 00:09:35.883 21:00:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:35.883 21:00:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:35.883 21:00:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:09:35.883 21:00:49 -- common/autotest_common.sh@861 -- # break 00:09:35.883 21:00:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:35.883 21:00:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:35.883 21:00:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.883 1+0 records in 00:09:35.883 1+0 records out 00:09:35.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000984267 s, 4.2 MB/s 00:09:35.884 21:00:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.884 21:00:49 -- common/autotest_common.sh@874 -- # size=4096 00:09:35.884 21:00:49 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.884 21:00:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:35.884 21:00:49 -- common/autotest_common.sh@877 -- # return 0 00:09:35.884 21:00:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.884 21:00:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:35.884 21:00:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:35.884 21:00:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.884 21:00:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:36.143 21:00:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd0", 00:09:36.143 "bdev_name": "Nvme0n1p1" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd1", 00:09:36.143 "bdev_name": "Nvme0n1p2" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd10", 00:09:36.143 "bdev_name": "Nvme1n1" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd11", 00:09:36.143 "bdev_name": "Nvme2n1" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd12", 00:09:36.143 "bdev_name": "Nvme2n2" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd13", 00:09:36.143 "bdev_name": "Nvme2n3" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd14", 00:09:36.143 "bdev_name": "Nvme3n1" 00:09:36.143 } 00:09:36.143 ]' 00:09:36.143 21:00:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd0", 00:09:36.143 "bdev_name": "Nvme0n1p1" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd1", 00:09:36.143 "bdev_name": "Nvme0n1p2" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd10", 00:09:36.143 "bdev_name": "Nvme1n1" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd11", 00:09:36.143 "bdev_name": "Nvme2n1" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd12", 00:09:36.143 "bdev_name": "Nvme2n2" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd13", 00:09:36.143 "bdev_name": "Nvme2n3" 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "nbd_device": "/dev/nbd14", 00:09:36.143 "bdev_name": "Nvme3n1" 00:09:36.143 } 00:09:36.143 ]' 00:09:36.143 21:00:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:36.143 /dev/nbd1 00:09:36.143 /dev/nbd10 00:09:36.143 /dev/nbd11 00:09:36.143 /dev/nbd12 00:09:36.143 /dev/nbd13 00:09:36.143 /dev/nbd14' 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:36.143 /dev/nbd1 00:09:36.143 /dev/nbd10 00:09:36.143 /dev/nbd11 00:09:36.143 /dev/nbd12 00:09:36.143 /dev/nbd13 00:09:36.143 /dev/nbd14' 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@65 -- # count=7 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@66 -- # echo 7 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@95 -- # count=7 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:36.143 21:00:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:36.402 256+0 records in 00:09:36.402 256+0 records out 00:09:36.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00689446 s, 152 MB/s 00:09:36.402 21:00:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.402 21:00:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:36.402 256+0 records in 00:09:36.402 256+0 records out 00:09:36.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173039 s, 6.1 MB/s 00:09:36.402 21:00:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.402 21:00:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:36.661 256+0 records in 00:09:36.661 256+0 records out 00:09:36.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.197677 s, 5.3 MB/s 00:09:36.661 21:00:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.661 21:00:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:36.925 256+0 records in 00:09:36.925 256+0 records out 00:09:36.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169653 s, 6.2 MB/s 00:09:36.925 21:00:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.925 21:00:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:36.925 256+0 records in 00:09:36.925 256+0 records out 00:09:36.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196974 s, 5.3 MB/s 00:09:36.925 21:00:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.925 21:00:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:37.184 256+0 records in 00:09:37.184 256+0 records out 00:09:37.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1968 s, 5.3 MB/s 00:09:37.184 21:00:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.184 21:00:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:37.443 256+0 records in 00:09:37.443 256+0 records out 00:09:37.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196648 s, 5.3 MB/s 00:09:37.443 21:00:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.443 21:00:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:37.702 256+0 records in 00:09:37.702 256+0 records out 00:09:37.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.195315 s, 5.4 MB/s 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@51 -- # local i 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.702 21:00:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:37.960 21:00:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:37.960 21:00:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:37.960 21:00:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:37.960 21:00:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:37.960 21:00:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:37.960 21:00:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:37.960 21:00:51 -- bdev/nbd_common.sh@41 -- # break 00:09:37.960 21:00:51 -- bdev/nbd_common.sh@45 -- # return 0 00:09:37.960 21:00:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.960 21:00:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:38.218 21:00:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:38.218 21:00:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:38.218 21:00:52 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:09:38.218 21:00:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.218 21:00:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.218 21:00:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:38.218 21:00:52 -- bdev/nbd_common.sh@41 -- # break 00:09:38.218 21:00:52 -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.218 21:00:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.218 21:00:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:38.476 21:00:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:38.476 21:00:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:38.476 21:00:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:38.476 21:00:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.476 21:00:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.476 21:00:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:38.476 21:00:52 -- bdev/nbd_common.sh@41 -- # break 00:09:38.476 21:00:52 -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.476 21:00:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.476 21:00:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:38.733 21:00:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:38.733 21:00:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:38.733 21:00:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:38.733 21:00:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.733 21:00:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.733 21:00:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:38.733 21:00:52 -- bdev/nbd_common.sh@41 -- # break 00:09:38.733 21:00:52 -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.733 21:00:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.733 21:00:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:38.990 21:00:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:38.990 21:00:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:38.990 21:00:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:38.990 21:00:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.990 21:00:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.990 21:00:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:38.990 21:00:52 -- bdev/nbd_common.sh@41 -- # break 00:09:38.990 21:00:52 -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.990 21:00:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.990 21:00:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:39.248 21:00:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:39.248 21:00:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:39.248 21:00:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:39.248 21:00:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.248 21:00:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.248 21:00:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:39.248 21:00:53 -- bdev/nbd_common.sh@41 -- # break 00:09:39.248 21:00:53 -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.248 21:00:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
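The stop loop traced above repeats one pattern per device: nbd_stop_disk is issued over the RPC socket, then waitfornbd_exit polls /proc/partitions until the kernel drops the nbd entry, capped at 20 checks. A minimal standalone sketch of that polling helper, reconstructed from the trace (the 0.1s sleep between checks is an assumption; the trace only shows the loop bounds, the grep, and the break):

#!/usr/bin/env bash
# Wait for an nbd device to disappear from /proc/partitions, as in
# bdev/nbd_common.sh's waitfornbd_exit seen in the trace above.
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the whole word, so nbd1 does not also match nbd10
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1    # assumed interval; not visible in the trace
        else
            break        # device entry is gone
        fi
    done
    return 0             # the trace shows an unconditional return 0
}

waitfornbd_exit nbd13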
00:09:39.248 21:00:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@41 -- # break 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.506 21:00:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@65 -- # true 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@65 -- # count=0 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@104 -- # count=0 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@109 -- # return 0 00:09:39.764 21:00:53 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:39.764 21:00:53 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:40.023 malloc_lvol_verify 00:09:40.023 21:00:53 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:40.281 cbdb3a62-061e-4fc4-a669-a1a59066326f 00:09:40.281 21:00:54 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:40.540 657b0b22-68f8-4f58-ac4f-a9ae2f8dc4d6 00:09:40.540 21:00:54 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:40.798 /dev/nbd0 00:09:40.798 21:00:54 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:40.798 mke2fs 1.46.5 (30-Dec-2021) 00:09:40.798 Discarding device blocks: 0/4096 done 00:09:40.798 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:40.798 00:09:40.798 Allocating group tables: 0/1 done 00:09:40.798 Writing inode tables: 0/1 done 00:09:40.798 Creating journal (1024 blocks): done 
00:09:40.798 Writing superblocks and filesystem accounting information: 0/1 done 00:09:40.798 00:09:40.798 21:00:54 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:40.798 21:00:54 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:40.798 21:00:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.798 21:00:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:40.798 21:00:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.798 21:00:54 -- bdev/nbd_common.sh@51 -- # local i 00:09:40.798 21:00:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.798 21:00:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:41.057 21:00:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:41.057 21:00:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:41.057 21:00:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:41.057 21:00:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.057 21:00:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.057 21:00:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:41.057 21:00:54 -- bdev/nbd_common.sh@41 -- # break 00:09:41.057 21:00:54 -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.057 21:00:54 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:41.057 21:00:54 -- bdev/nbd_common.sh@147 -- # return 0 00:09:41.057 21:00:54 -- bdev/blockdev.sh@324 -- # killprocess 62528 00:09:41.057 21:00:54 -- common/autotest_common.sh@926 -- # '[' -z 62528 ']' 00:09:41.057 21:00:54 -- common/autotest_common.sh@930 -- # kill -0 62528 00:09:41.057 21:00:54 -- common/autotest_common.sh@931 -- # uname 00:09:41.057 21:00:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:41.057 21:00:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62528 00:09:41.315 21:00:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:41.315 21:00:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:41.315 killing process with pid 62528 00:09:41.315 21:00:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62528' 00:09:41.315 21:00:54 -- common/autotest_common.sh@945 -- # kill 62528 00:09:41.315 21:00:54 -- common/autotest_common.sh@950 -- # wait 62528 00:09:42.250 21:00:56 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:09:42.251 00:09:42.251 real 0m13.999s 00:09:42.251 user 0m19.384s 00:09:42.251 sys 0m4.465s 00:09:42.251 21:00:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.251 21:00:56 -- common/autotest_common.sh@10 -- # set +x 00:09:42.251 ************************************ 00:09:42.251 END TEST bdev_nbd 00:09:42.251 ************************************ 00:09:42.251 21:00:56 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:09:42.251 21:00:56 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:09:42.251 21:00:56 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:09:42.251 skipping fio tests on NVMe due to multi-ns failures. 00:09:42.251 21:00:56 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
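Just before the END TEST banner above, the harness double-checked that the stop loop really detached everything: nbd_get_count pulls the nbd_get_disks JSON over the RPC socket, flattens it to device paths with jq, and counts /dev/nbd matches with grep -c, expecting zero. A hedged sketch of that count (rpc.py path and socket as seen in the trace; the || true is inferred from the bare `true` in the trace, since grep -c exits nonzero on zero matches and would otherwise abort a set -e shell):

#!/usr/bin/env bash
set -e
# Count nbd devices still exported by the SPDK nbd server, following
# the jq/grep pattern from bdev/nbd_common.sh's nbd_get_count.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk-nbd.sock

nbd_disks_json=$("$rpc_py" -s "$rpc_sock" nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)

if [[ $count -ne 0 ]]; then
    echo "expected 0 nbd devices after teardown, found $count" >&2
    exit 1
fi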
00:09:42.251 21:00:56 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:42.251 21:00:56 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:42.251 21:00:56 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:09:42.251 21:00:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:42.251 21:00:56 -- common/autotest_common.sh@10 -- # set +x 00:09:42.251 ************************************ 00:09:42.251 START TEST bdev_verify 00:09:42.251 ************************************ 00:09:42.251 21:00:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:42.510 [2024-07-13 21:00:56.187909] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:42.510 [2024-07-13 21:00:56.188098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62977 ] 00:09:42.510 [2024-07-13 21:00:56.360667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.769 [2024-07-13 21:00:56.567012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.769 [2024-07-13 21:00:56.567025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.335 Running I/O for 5 seconds... 00:09:48.603 00:09:48.603 Latency(us) 00:09:48.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.603 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.603 Verification LBA range: start 0x0 length 0x5e800 00:09:48.603 Nvme0n1p1 : 5.05 2397.76 9.37 0.00 0.00 53206.26 10902.81 61961.31 00:09:48.603 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.603 Verification LBA range: start 0x5e800 length 0x5e800 00:09:48.603 Nvme0n1p1 : 5.04 2398.09 9.37 0.00 0.00 53207.59 9413.35 60769.75 00:09:48.603 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.603 Verification LBA range: start 0x0 length 0x5e7ff 00:09:48.603 Nvme0n1p2 : 5.05 2401.23 9.38 0.00 0.00 53107.24 5034.36 59339.87 00:09:48.603 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.603 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:48.603 Nvme0n1p2 : 5.05 2401.42 9.38 0.00 0.00 53104.76 5510.98 57909.99 00:09:48.603 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.603 Verification LBA range: start 0x0 length 0xa0000 00:09:48.603 Nvme1n1 : 5.06 2399.97 9.37 0.00 0.00 53037.65 6821.70 52428.80 00:09:48.603 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.603 Verification LBA range: start 0xa0000 length 0xa0000 00:09:48.603 Nvme1n1 : 5.05 2400.25 9.38 0.00 0.00 53016.78 7060.01 50045.67 00:09:48.603 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.603 Verification LBA range: start 0x0 length 0x80000 00:09:48.604 Nvme2n1 : 5.06 2398.72 9.37 0.00 0.00 52957.32 8460.10 47185.92 00:09:48.604 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.604 Verification LBA range: start 0x80000 length 0x80000 00:09:48.604 Nvme2n1 : 
5.06 2399.02 9.37 0.00 0.00 52961.61 8460.10 46232.67 00:09:48.604 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.604 Verification LBA range: start 0x0 length 0x80000 00:09:48.604 Nvme2n2 : 5.06 2404.99 9.39 0.00 0.00 52806.82 2412.92 44087.85 00:09:48.604 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.604 Verification LBA range: start 0x80000 length 0x80000 00:09:48.604 Nvme2n2 : 5.06 2397.82 9.37 0.00 0.00 52929.10 10009.13 45279.42 00:09:48.604 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.604 Verification LBA range: start 0x0 length 0x80000 00:09:48.604 Nvme2n3 : 5.07 2403.67 9.39 0.00 0.00 52771.20 4289.63 44564.48 00:09:48.604 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.604 Verification LBA range: start 0x80000 length 0x80000 00:09:48.604 Nvme2n3 : 5.06 2404.87 9.39 0.00 0.00 52786.74 1362.85 46470.98 00:09:48.604 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.604 Verification LBA range: start 0x0 length 0x20000 00:09:48.604 Nvme3n1 : 5.07 2402.35 9.38 0.00 0.00 52741.32 5987.61 44564.48 00:09:48.604 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.604 Verification LBA range: start 0x20000 length 0x20000 00:09:48.604 Nvme3n1 : 5.07 2403.53 9.39 0.00 0.00 52758.07 3127.85 45994.36 00:09:48.604 =================================================================================================================== 00:09:48.604 Total : 33613.70 131.30 0.00 0.00 52956.26 1362.85 61961.31 00:09:51.137 00:09:51.137 real 0m8.527s 00:09:51.137 user 0m15.754s 00:09:51.137 sys 0m0.266s 00:09:51.137 21:01:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.137 ************************************ 00:09:51.137 END TEST bdev_verify 00:09:51.137 21:01:04 -- common/autotest_common.sh@10 -- # set +x 00:09:51.137 ************************************ 00:09:51.138 21:01:04 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:51.138 21:01:04 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:09:51.138 21:01:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:51.138 21:01:04 -- common/autotest_common.sh@10 -- # set +x 00:09:51.138 ************************************ 00:09:51.138 START TEST bdev_verify_big_io 00:09:51.138 ************************************ 00:09:51.138 21:01:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:51.138 [2024-07-13 21:01:04.770068] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:51.138 [2024-07-13 21:01:04.770246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63086 ] 00:09:51.138 [2024-07-13 21:01:04.941034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:51.397 [2024-07-13 21:01:05.107285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.397 [2024-07-13 21:01:05.107301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.965 Running I/O for 5 seconds... 00:09:58.531 00:09:58.531 Latency(us) 00:09:58.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.531 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x0 length 0x5e80 00:09:58.531 Nvme0n1p1 : 5.42 224.02 14.00 0.00 0.00 563408.98 33840.41 735909.70 00:09:58.531 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x5e80 length 0x5e80 00:09:58.531 Nvme0n1p1 : 5.42 223.90 13.99 0.00 0.00 564235.22 31457.28 735909.70 00:09:58.531 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x0 length 0x5e7f 00:09:58.531 Nvme0n1p2 : 5.42 223.86 13.99 0.00 0.00 556840.85 35746.91 690153.66 00:09:58.531 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:58.531 Nvme0n1p2 : 5.42 223.80 13.99 0.00 0.00 556928.30 32172.22 686340.65 00:09:58.531 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x0 length 0xa000 00:09:58.531 Nvme1n1 : 5.43 223.75 13.98 0.00 0.00 549495.95 36461.85 636771.61 00:09:58.531 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0xa000 length 0xa000 00:09:58.531 Nvme1n1 : 5.43 223.72 13.98 0.00 0.00 549415.07 32410.53 629145.60 00:09:58.531 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x0 length 0x8000 00:09:58.531 Nvme2n1 : 5.43 223.62 13.98 0.00 0.00 541945.72 37415.10 579576.55 00:09:58.531 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x8000 length 0x8000 00:09:58.531 Nvme2n1 : 5.43 223.60 13.98 0.00 0.00 541790.64 33602.09 602454.57 00:09:58.531 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x0 length 0x8000 00:09:58.531 Nvme2n2 : 5.47 229.26 14.33 0.00 0.00 521721.72 32172.22 606267.58 00:09:58.531 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x8000 length 0x8000 00:09:58.531 Nvme2n2 : 5.43 223.45 13.97 0.00 0.00 534290.66 35746.91 640584.61 00:09:58.531 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x0 length 0x8000 00:09:58.531 Nvme2n3 : 5.47 229.17 14.32 0.00 0.00 514604.10 32648.84 617706.59 00:09:58.531 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x8000 length 0x8000 00:09:58.531 Nvme2n3 : 5.47 229.22 14.33 0.00 0.00 513437.07 30504.03 652023.62 
00:09:58.531 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x0 length 0x2000 00:09:58.531 Nvme3n1 : 5.48 246.13 15.38 0.00 0.00 475830.69 6702.55 823608.79 00:09:58.531 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.531 Verification LBA range: start 0x2000 length 0x2000 00:09:58.531 Nvme3n1 : 5.48 246.23 15.39 0.00 0.00 474404.99 5540.77 827421.79 00:09:58.531 =================================================================================================================== 00:09:58.531 Total : 3193.72 199.61 0.00 0.00 531735.57 5540.77 827421.79 00:09:59.466 00:09:59.466 real 0m8.459s 00:09:59.466 user 0m15.631s 00:09:59.466 sys 0m0.272s 00:09:59.466 21:01:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.466 ************************************ 00:09:59.466 END TEST bdev_verify_big_io 00:09:59.466 ************************************ 00:09:59.466 21:01:13 -- common/autotest_common.sh@10 -- # set +x 00:09:59.466 21:01:13 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:59.466 21:01:13 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:59.466 21:01:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:59.466 21:01:13 -- common/autotest_common.sh@10 -- # set +x 00:09:59.466 ************************************ 00:09:59.466 START TEST bdev_write_zeroes 00:09:59.466 ************************************ 00:09:59.466 21:01:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:59.466 [2024-07-13 21:01:13.279191] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:59.466 [2024-07-13 21:01:13.279384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63200 ] 00:09:59.724 [2024-07-13 21:01:13.448795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.724 [2024-07-13 21:01:13.623835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.658 Running I/O for 1 seconds... 
00:10:01.591 00:10:01.591 Latency(us) 00:10:01.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.591 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:01.591 Nvme0n1p1 : 1.02 7222.72 28.21 0.00 0.00 17654.83 12451.84 34078.72 00:10:01.591 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:01.591 Nvme0n1p2 : 1.02 7212.62 28.17 0.00 0.00 17647.05 12868.89 35031.97 00:10:01.591 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:01.591 Nvme1n1 : 1.02 7203.75 28.14 0.00 0.00 17613.53 13107.20 33602.09 00:10:01.591 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:01.591 Nvme2n1 : 1.03 7242.15 28.29 0.00 0.00 17451.19 9532.51 29074.15 00:10:01.591 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:01.591 Nvme2n2 : 1.03 7232.56 28.25 0.00 0.00 17436.80 9889.98 29074.15 00:10:01.591 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:01.591 Nvme2n3 : 1.03 7223.81 28.22 0.00 0.00 17391.56 10187.87 28478.37 00:10:01.591 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:01.591 Nvme3n1 : 1.03 7270.38 28.40 0.00 0.00 17262.08 5153.51 28478.37 00:10:01.591 =================================================================================================================== 00:10:01.591 Total : 50607.99 197.69 0.00 0.00 17493.04 5153.51 35031.97 00:10:02.529 00:10:02.529 real 0m3.155s 00:10:02.529 user 0m2.812s 00:10:02.529 sys 0m0.220s 00:10:02.529 21:01:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.529 21:01:16 -- common/autotest_common.sh@10 -- # set +x 00:10:02.529 ************************************ 00:10:02.529 END TEST bdev_write_zeroes 00:10:02.529 ************************************ 00:10:02.529 21:01:16 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:02.529 21:01:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:02.529 21:01:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.529 21:01:16 -- common/autotest_common.sh@10 -- # set +x 00:10:02.529 ************************************ 00:10:02.529 START TEST bdev_json_nonenclosed 00:10:02.529 ************************************ 00:10:02.529 21:01:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:02.788 [2024-07-13 21:01:16.470664] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:02.788 [2024-07-13 21:01:16.470816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63249 ] 00:10:02.788 [2024-07-13 21:01:16.627491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.047 [2024-07-13 21:01:16.796965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.047 [2024-07-13 21:01:16.797166] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:10:03.047 [2024-07-13 21:01:16.797192] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:03.306 00:10:03.306 real 0m0.785s 00:10:03.306 user 0m0.572s 00:10:03.306 sys 0m0.107s 00:10:03.307 21:01:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.307 ************************************ 00:10:03.307 END TEST bdev_json_nonenclosed 00:10:03.307 21:01:17 -- common/autotest_common.sh@10 -- # set +x 00:10:03.307 ************************************ 00:10:03.307 21:01:17 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:03.307 21:01:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:03.307 21:01:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:03.307 21:01:17 -- common/autotest_common.sh@10 -- # set +x 00:10:03.307 ************************************ 00:10:03.307 START TEST bdev_json_nonarray 00:10:03.307 ************************************ 00:10:03.307 21:01:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:03.566 [2024-07-13 21:01:17.323212] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:03.566 [2024-07-13 21:01:17.323436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63280 ] 00:10:03.825 [2024-07-13 21:01:17.494611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.825 [2024-07-13 21:01:17.681920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.825 [2024-07-13 21:01:17.682160] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:03.825 [2024-07-13 21:01:17.682190] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:04.393 00:10:04.393 real 0m0.837s 00:10:04.393 user 0m0.610s 00:10:04.393 sys 0m0.121s 00:10:04.393 21:01:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.393 21:01:18 -- common/autotest_common.sh@10 -- # set +x 00:10:04.393 ************************************ 00:10:04.394 END TEST bdev_json_nonarray 00:10:04.394 ************************************ 00:10:04.394 21:01:18 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:10:04.394 21:01:18 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:10:04.394 21:01:18 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:04.394 21:01:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:04.394 21:01:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:04.394 21:01:18 -- common/autotest_common.sh@10 -- # set +x 00:10:04.394 ************************************ 00:10:04.394 START TEST bdev_gpt_uuid 00:10:04.394 ************************************ 00:10:04.394 21:01:18 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:10:04.394 21:01:18 -- bdev/blockdev.sh@612 -- # local bdev 00:10:04.394 21:01:18 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:10:04.394 21:01:18 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=63311 00:10:04.394 21:01:18 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:04.394 21:01:18 -- bdev/blockdev.sh@47 -- # waitforlisten 63311 00:10:04.394 21:01:18 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:04.394 21:01:18 -- common/autotest_common.sh@819 -- # '[' -z 63311 ']' 00:10:04.394 21:01:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.394 21:01:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:04.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.394 21:01:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.394 21:01:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:04.394 21:01:18 -- common/autotest_common.sh@10 -- # set +x 00:10:04.394 [2024-07-13 21:01:18.237378] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:04.394 [2024-07-13 21:01:18.237573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63311 ] 00:10:04.652 [2024-07-13 21:01:18.408672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.912 [2024-07-13 21:01:18.578574] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:04.912 [2024-07-13 21:01:18.578848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.289 21:01:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:06.289 21:01:19 -- common/autotest_common.sh@852 -- # return 0 00:10:06.289 21:01:19 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:06.289 21:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:06.289 21:01:19 -- common/autotest_common.sh@10 -- # set +x 00:10:06.548 Some configs were skipped because the RPC state that can call them passed over. 
00:10:06.548 21:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:06.548 21:01:20 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:10:06.548 21:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:06.548 21:01:20 -- common/autotest_common.sh@10 -- # set +x 00:10:06.548 21:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:06.548 21:01:20 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:06.548 21:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:06.548 21:01:20 -- common/autotest_common.sh@10 -- # set +x 00:10:06.548 21:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:06.548 21:01:20 -- bdev/blockdev.sh@619 -- # bdev='[ 00:10:06.548 { 00:10:06.548 "name": "Nvme0n1p1", 00:10:06.548 "aliases": [ 00:10:06.548 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:06.548 ], 00:10:06.548 "product_name": "GPT Disk", 00:10:06.548 "block_size": 4096, 00:10:06.548 "num_blocks": 774144, 00:10:06.548 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:06.548 "md_size": 64, 00:10:06.548 "md_interleave": false, 00:10:06.548 "dif_type": 0, 00:10:06.548 "assigned_rate_limits": { 00:10:06.548 "rw_ios_per_sec": 0, 00:10:06.548 "rw_mbytes_per_sec": 0, 00:10:06.548 "r_mbytes_per_sec": 0, 00:10:06.548 "w_mbytes_per_sec": 0 00:10:06.548 }, 00:10:06.548 "claimed": false, 00:10:06.548 "zoned": false, 00:10:06.548 "supported_io_types": { 00:10:06.548 "read": true, 00:10:06.548 "write": true, 00:10:06.548 "unmap": true, 00:10:06.548 "write_zeroes": true, 00:10:06.548 "flush": true, 00:10:06.548 "reset": true, 00:10:06.548 "compare": true, 00:10:06.548 "compare_and_write": false, 00:10:06.548 "abort": true, 00:10:06.548 "nvme_admin": false, 00:10:06.548 "nvme_io": false 00:10:06.548 }, 00:10:06.548 "driver_specific": { 00:10:06.548 "gpt": { 00:10:06.548 "base_bdev": "Nvme0n1", 00:10:06.548 "offset_blocks": 256, 00:10:06.548 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:06.548 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:06.548 "partition_name": "SPDK_TEST_first" 00:10:06.548 } 00:10:06.548 } 00:10:06.548 } 00:10:06.548 ]' 00:10:06.548 21:01:20 -- bdev/blockdev.sh@620 -- # jq -r length 00:10:06.548 21:01:20 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:10:06.548 21:01:20 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:10:06.548 21:01:20 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:06.548 21:01:20 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:06.548 21:01:20 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:06.548 21:01:20 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:06.548 21:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:06.548 21:01:20 -- common/autotest_common.sh@10 -- # set +x 00:10:06.548 21:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:06.548 21:01:20 -- bdev/blockdev.sh@624 -- # bdev='[ 00:10:06.548 { 00:10:06.548 "name": "Nvme0n1p2", 00:10:06.548 "aliases": [ 00:10:06.548 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:06.548 ], 00:10:06.548 "product_name": "GPT Disk", 00:10:06.548 "block_size": 4096, 00:10:06.548 "num_blocks": 774143, 00:10:06.548 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:10:06.548 "md_size": 64, 00:10:06.548 "md_interleave": false, 00:10:06.548 "dif_type": 0, 00:10:06.548 "assigned_rate_limits": { 00:10:06.548 "rw_ios_per_sec": 0, 00:10:06.548 "rw_mbytes_per_sec": 0, 00:10:06.548 "r_mbytes_per_sec": 0, 00:10:06.548 "w_mbytes_per_sec": 0 00:10:06.548 }, 00:10:06.548 "claimed": false, 00:10:06.548 "zoned": false, 00:10:06.548 "supported_io_types": { 00:10:06.548 "read": true, 00:10:06.548 "write": true, 00:10:06.548 "unmap": true, 00:10:06.548 "write_zeroes": true, 00:10:06.548 "flush": true, 00:10:06.548 "reset": true, 00:10:06.548 "compare": true, 00:10:06.548 "compare_and_write": false, 00:10:06.548 "abort": true, 00:10:06.548 "nvme_admin": false, 00:10:06.548 "nvme_io": false 00:10:06.548 }, 00:10:06.548 "driver_specific": { 00:10:06.548 "gpt": { 00:10:06.548 "base_bdev": "Nvme0n1", 00:10:06.548 "offset_blocks": 774400, 00:10:06.548 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:06.548 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:06.548 "partition_name": "SPDK_TEST_second" 00:10:06.548 } 00:10:06.548 } 00:10:06.548 } 00:10:06.548 ]' 00:10:06.548 21:01:20 -- bdev/blockdev.sh@625 -- # jq -r length 00:10:06.807 21:01:20 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:10:06.807 21:01:20 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:10:06.807 21:01:20 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:06.807 21:01:20 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:06.807 21:01:20 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:06.807 21:01:20 -- bdev/blockdev.sh@629 -- # killprocess 63311 00:10:06.807 21:01:20 -- common/autotest_common.sh@926 -- # '[' -z 63311 ']' 00:10:06.807 21:01:20 -- common/autotest_common.sh@930 -- # kill -0 63311 00:10:06.807 21:01:20 -- common/autotest_common.sh@931 -- # uname 00:10:06.808 21:01:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:06.808 21:01:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63311 00:10:06.808 killing process with pid 63311 00:10:06.808 21:01:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:06.808 21:01:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:06.808 21:01:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63311' 00:10:06.808 21:01:20 -- common/autotest_common.sh@945 -- # kill 63311 00:10:06.808 21:01:20 -- common/autotest_common.sh@950 -- # wait 63311 00:10:08.712 ************************************ 00:10:08.712 END TEST bdev_gpt_uuid 00:10:08.712 ************************************ 00:10:08.712 00:10:08.712 real 0m4.413s 00:10:08.712 user 0m4.979s 00:10:08.712 sys 0m0.449s 00:10:08.712 21:01:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.712 21:01:22 -- common/autotest_common.sh@10 -- # set +x 00:10:08.712 21:01:22 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:10:08.712 21:01:22 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:10:08.712 21:01:22 -- bdev/blockdev.sh@809 -- # cleanup 00:10:08.712 21:01:22 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:08.712 21:01:22 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:08.712 21:01:22 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:10:08.712 21:01:22 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:10:08.712 21:01:22 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:10:08.712 21:01:22 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:09.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.280 Waiting for block devices as requested 00:10:09.280 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.537 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.537 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.537 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.805 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:14.805 21:01:28 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:10:14.805 21:01:28 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:10:15.064 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:15.064 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:10:15.064 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:15.064 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:10:15.064 21:01:28 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:10:15.064 00:10:15.064 real 1m5.621s 00:10:15.064 user 1m25.374s 00:10:15.064 sys 0m9.603s 00:10:15.064 21:01:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.064 ************************************ 00:10:15.064 END TEST blockdev_nvme_gpt 00:10:15.064 ************************************ 00:10:15.064 21:01:28 -- common/autotest_common.sh@10 -- # set +x 00:10:15.064 21:01:28 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:15.064 21:01:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:15.064 21:01:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:15.064 21:01:28 -- common/autotest_common.sh@10 -- # set +x 00:10:15.064 ************************************ 00:10:15.064 START TEST nvme 00:10:15.064 ************************************ 00:10:15.064 21:01:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:15.064 * Looking for test storage... 00:10:15.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:15.064 21:01:28 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:15.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:16.257 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.257 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.257 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.258 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.258 21:01:30 -- nvme/nvme.sh@79 -- # uname 00:10:16.258 21:01:30 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:16.258 21:01:30 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:16.258 21:01:30 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:16.258 21:01:30 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:16.258 21:01:30 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:10:16.258 21:01:30 -- common/autotest_common.sh@1045 -- # echo 0 00:10:16.258 Waiting for stub to ready for secondary processes... 
00:10:16.258 21:01:30 -- common/autotest_common.sh@1047 -- # stubpid=63977 00:10:16.258 21:01:30 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:16.258 21:01:30 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:10:16.258 21:01:30 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:16.258 21:01:30 -- common/autotest_common.sh@1051 -- # [[ -e /proc/63977 ]] 00:10:16.258 21:01:30 -- common/autotest_common.sh@1052 -- # sleep 1s 00:10:16.516 [2024-07-13 21:01:30.193749] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:16.516 [2024-07-13 21:01:30.194109] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.083 [2024-07-13 21:01:30.909499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.343 [2024-07-13 21:01:31.117552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.343 [2024-07-13 21:01:31.117651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.343 [2024-07-13 21:01:31.117660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.343 [2024-07-13 21:01:31.140760] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.343 [2024-07-13 21:01:31.153157] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:17.343 [2024-07-13 21:01:31.153346] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:17.343 21:01:31 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:17.343 21:01:31 -- common/autotest_common.sh@1051 -- # [[ -e /proc/63977 ]] 00:10:17.343 21:01:31 -- common/autotest_common.sh@1052 -- # sleep 1s 00:10:17.343 [2024-07-13 21:01:31.162805] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.343 [2024-07-13 21:01:31.163024] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:17.343 [2024-07-13 21:01:31.163299] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:17.343 [2024-07-13 21:01:31.172919] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.343 [2024-07-13 21:01:31.173261] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:17.343 [2024-07-13 21:01:31.173425] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:17.343 [2024-07-13 21:01:31.182802] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.343 [2024-07-13 21:01:31.183051] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:17.343 [2024-07-13 21:01:31.183207] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:17.343 [2024-07-13 21:01:31.183379] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:17.343 [2024-07-13 21:01:31.183655] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:18.277 done. 
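The start-up above is autotest's stub hold-open pattern: a primary SPDK process claims the hugepages and registers the cuse devices once, and each test binary then attaches as a secondary. Condensed into a standalone sketch with the same flags as this run (paths relative to the spdk repo root):

    # Launch the multi-process stub (4096 MB of hugepage memory, shm id 0,
    # cores 1-3 via mask 0xE), then poll for the socket it creates when ready.
    test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    while [ ! -e /var/run/spdk_stub0 ] && [ -e /proc/$stubpid ]; do
        sleep 1s
    done
    echo done.

Keeping the stub alive between tests is what lets the per-test tools skip repeated EAL initialization, which is the hand-off the 'done.' above marks.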
00:10:18.277 21:01:32 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:18.277 21:01:32 -- common/autotest_common.sh@1054 -- # echo done. 00:10:18.277 21:01:32 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:18.277 21:01:32 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:18.277 21:01:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:18.277 21:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.277 ************************************ 00:10:18.277 START TEST nvme_reset 00:10:18.277 ************************************ 00:10:18.277 21:01:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:18.535 Initializing NVMe Controllers 00:10:18.535 Skipping QEMU NVMe SSD at 0000:00:06.0 00:10:18.535 Skipping QEMU NVMe SSD at 0000:00:07.0 00:10:18.535 Skipping QEMU NVMe SSD at 0000:00:09.0 00:10:18.535 Skipping QEMU NVMe SSD at 0000:00:08.0 00:10:18.535 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:18.535 00:10:18.535 real 0m0.257s 00:10:18.535 ************************************ 00:10:18.535 END TEST nvme_reset 00:10:18.535 ************************************ 00:10:18.535 user 0m0.089s 00:10:18.535 sys 0m0.121s 00:10:18.535 21:01:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.535 21:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.793 21:01:32 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:18.793 21:01:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:18.793 21:01:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:18.793 21:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.793 ************************************ 00:10:18.793 START TEST nvme_identify 00:10:18.793 ************************************ 00:10:18.793 21:01:32 -- common/autotest_common.sh@1104 -- # nvme_identify 00:10:18.793 21:01:32 -- nvme/nvme.sh@12 -- # bdfs=() 00:10:18.793 21:01:32 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:18.793 21:01:32 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:18.793 21:01:32 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:18.793 21:01:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:18.793 21:01:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:18.793 21:01:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:18.793 21:01:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:18.793 21:01:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:18.793 21:01:32 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:18.793 21:01:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:18.793 21:01:32 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:19.053 [2024-07-13 21:01:32.791748] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 64019 terminated unexpected 00:10:19.053 ===================================================== 00:10:19.053 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:19.053 ===================================================== 00:10:19.053 Controller Capabilities/Features 00:10:19.053 ================================ 00:10:19.053 Vendor ID: 1b36 00:10:19.053 
Subsystem Vendor ID: 1af4 00:10:19.053 Serial Number: 12340 00:10:19.053 Model Number: QEMU NVMe Ctrl 00:10:19.053 Firmware Version: 8.0.0 00:10:19.053 Recommended Arb Burst: 6 00:10:19.053 IEEE OUI Identifier: 00 54 52 00:10:19.053 Multi-path I/O 00:10:19.053 May have multiple subsystem ports: No 00:10:19.053 May have multiple controllers: No 00:10:19.053 Associated with SR-IOV VF: No 00:10:19.053 Max Data Transfer Size: 524288 00:10:19.053 Max Number of Namespaces: 256 00:10:19.053 Max Number of I/O Queues: 64 00:10:19.053 NVMe Specification Version (VS): 1.4 00:10:19.053 NVMe Specification Version (Identify): 1.4 00:10:19.053 Maximum Queue Entries: 2048 00:10:19.053 Contiguous Queues Required: Yes 00:10:19.053 Arbitration Mechanisms Supported 00:10:19.053 Weighted Round Robin: Not Supported 00:10:19.053 Vendor Specific: Not Supported 00:10:19.053 Reset Timeout: 7500 ms 00:10:19.053 Doorbell Stride: 4 bytes 00:10:19.053 NVM Subsystem Reset: Not Supported 00:10:19.053 Command Sets Supported 00:10:19.053 NVM Command Set: Supported 00:10:19.053 Boot Partition: Not Supported 00:10:19.053 Memory Page Size Minimum: 4096 bytes 00:10:19.053 Memory Page Size Maximum: 65536 bytes 00:10:19.053 Persistent Memory Region: Not Supported 00:10:19.053 Optional Asynchronous Events Supported 00:10:19.053 Namespace Attribute Notices: Supported 00:10:19.053 Firmware Activation Notices: Not Supported 00:10:19.053 ANA Change Notices: Not Supported 00:10:19.053 PLE Aggregate Log Change Notices: Not Supported 00:10:19.053 LBA Status Info Alert Notices: Not Supported 00:10:19.053 EGE Aggregate Log Change Notices: Not Supported 00:10:19.053 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.053 Zone Descriptor Change Notices: Not Supported 00:10:19.053 Discovery Log Change Notices: Not Supported 00:10:19.053 Controller Attributes 00:10:19.053 128-bit Host Identifier: Not Supported 00:10:19.053 Non-Operational Permissive Mode: Not Supported 00:10:19.053 NVM Sets: Not Supported 00:10:19.053 Read Recovery Levels: Not Supported 00:10:19.053 Endurance Groups: Not Supported 00:10:19.053 Predictable Latency Mode: Not Supported 00:10:19.053 Traffic Based Keep Alive: Not Supported 00:10:19.053 Namespace Granularity: Not Supported 00:10:19.053 SQ Associations: Not Supported 00:10:19.053 UUID List: Not Supported 00:10:19.053 Multi-Domain Subsystem: Not Supported 00:10:19.053 Fixed Capacity Management: Not Supported 00:10:19.053 Variable Capacity Management: Not Supported 00:10:19.053 Delete Endurance Group: Not Supported 00:10:19.053 Delete NVM Set: Not Supported 00:10:19.053 Extended LBA Formats Supported: Supported 00:10:19.053 Flexible Data Placement Supported: Not Supported 00:10:19.053 00:10:19.053 Controller Memory Buffer Support 00:10:19.053 ================================ 00:10:19.053 Supported: No 00:10:19.053 00:10:19.053 Persistent Memory Region Support 00:10:19.053 ================================ 00:10:19.053 Supported: No 00:10:19.053 00:10:19.053 Admin Command Set Attributes 00:10:19.053 ============================ 00:10:19.053 Security Send/Receive: Not Supported 00:10:19.053 Format NVM: Supported 00:10:19.053 Firmware Activate/Download: Not Supported 00:10:19.053 Namespace Management: Supported 00:10:19.053 Device Self-Test: Not Supported 00:10:19.053 Directives: Supported 00:10:19.053 NVMe-MI: Not Supported 00:10:19.053 Virtualization Management: Not Supported 00:10:19.053 Doorbell Buffer Config: Supported 00:10:19.053 Get LBA Status Capability: Not Supported 00:10:19.053 Command & Feature
Lockdown Capability: Not Supported 00:10:19.053 Abort Command Limit: 4 00:10:19.053 Async Event Request Limit: 4 00:10:19.053 Number of Firmware Slots: N/A 00:10:19.053 Firmware Slot 1 Read-Only: N/A 00:10:19.053 Firmware Activation Without Reset: N/A 00:10:19.053 Multiple Update Detection Support: N/A 00:10:19.053 Firmware Update Granularity: No Information Provided 00:10:19.053 Per-Namespace SMART Log: Yes 00:10:19.053 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.053 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:19.053 Command Effects Log Page: Supported 00:10:19.053 Get Log Page Extended Data: Supported 00:10:19.053 Telemetry Log Pages: Not Supported 00:10:19.053 Persistent Event Log Pages: Not Supported 00:10:19.053 Supported Log Pages Log Page: May Support 00:10:19.053 Commands Supported & Effects Log Page: Not Supported 00:10:19.053 Feature Identifiers & Effects Log Page: May Support 00:10:19.053 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.053 Data Area 4 for Telemetry Log: Not Supported 00:10:19.053 Error Log Page Entries Supported: 1 00:10:19.053 Keep Alive: Not Supported 00:10:19.053 00:10:19.053 NVM Command Set Attributes 00:10:19.053 ========================== 00:10:19.053 Submission Queue Entry Size 00:10:19.053 Max: 64 00:10:19.053 Min: 64 00:10:19.053 Completion Queue Entry Size 00:10:19.053 Max: 16 00:10:19.054 Min: 16 00:10:19.054 Number of Namespaces: 256 00:10:19.054 Compare Command: Supported 00:10:19.054 Write Uncorrectable Command: Not Supported 00:10:19.054 Dataset Management Command: Supported 00:10:19.054 Write Zeroes Command: Supported 00:10:19.054 Set Features Save Field: Supported 00:10:19.054 Reservations: Not Supported 00:10:19.054 Timestamp: Supported 00:10:19.054 Copy: Supported 00:10:19.054 Volatile Write Cache: Present 00:10:19.054 Atomic Write Unit (Normal): 1 00:10:19.054 Atomic Write Unit (PFail): 1 00:10:19.054 Atomic Compare & Write Unit: 1 00:10:19.054 Fused Compare & Write: Not Supported 00:10:19.054 Scatter-Gather List 00:10:19.054 SGL Command Set: Supported 00:10:19.054 SGL Keyed: Not Supported 00:10:19.054 SGL Bit Bucket Descriptor: Not Supported 00:10:19.054 SGL Metadata Pointer: Not Supported 00:10:19.054 Oversized SGL: Not Supported 00:10:19.054 SGL Metadata Address: Not Supported 00:10:19.054 SGL Offset: Not Supported 00:10:19.054 Transport SGL Data Block: Not Supported 00:10:19.054 Replay Protected Memory Block: Not Supported 00:10:19.054 00:10:19.054 Firmware Slot Information 00:10:19.054 ========================= 00:10:19.054 Active slot: 1 00:10:19.054 Slot 1 Firmware Revision: 1.0 00:10:19.054 00:10:19.054 00:10:19.054 Commands Supported and Effects 00:10:19.054 ============================== 00:10:19.054 Admin Commands 00:10:19.054 -------------- 00:10:19.054 Delete I/O Submission Queue (00h): Supported 00:10:19.054 Create I/O Submission Queue (01h): Supported 00:10:19.054 Get Log Page (02h): Supported 00:10:19.054 Delete I/O Completion Queue (04h): Supported 00:10:19.054 Create I/O Completion Queue (05h): Supported 00:10:19.054 Identify (06h): Supported 00:10:19.054 Abort (08h): Supported 00:10:19.054 Set Features (09h): Supported 00:10:19.054 Get Features (0Ah): Supported 00:10:19.054 Asynchronous Event Request (0Ch): Supported 00:10:19.054 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.054 Directive Send (19h): Supported 00:10:19.054 Directive Receive (1Ah): Supported 00:10:19.054 Virtualization Management (1Ch): Supported 00:10:19.054 Doorbell Buffer Config (7Ch): Supported
00:10:19.054 Format NVM (80h): Supported LBA-Change 00:10:19.054 I/O Commands 00:10:19.054 ------------ 00:10:19.054 Flush (00h): Supported LBA-Change 00:10:19.054 Write (01h): Supported LBA-Change 00:10:19.054 Read (02h): Supported 00:10:19.054 Compare (05h): Supported 00:10:19.054 Write Zeroes (08h): Supported LBA-Change 00:10:19.054 Dataset Management (09h): Supported LBA-Change 00:10:19.054 Unknown (0Ch): Supported 00:10:19.054 Unknown (12h): Supported 00:10:19.054 Copy (19h): Supported LBA-Change 00:10:19.054 Unknown (1Dh): Supported LBA-Change 00:10:19.054 00:10:19.054 Error Log 00:10:19.054 ========= 00:10:19.054 00:10:19.054 Arbitration 00:10:19.054 =========== 00:10:19.054 Arbitration Burst: no limit 00:10:19.054 00:10:19.054 Power Management 00:10:19.054 ================ 00:10:19.054 Number of Power States: 1 00:10:19.054 Current Power State: Power State #0 00:10:19.054 Power State #0: 00:10:19.054 Max Power: 25.00 W 00:10:19.054 Non-Operational State: Operational 00:10:19.054 Entry Latency: 16 microseconds 00:10:19.054 Exit Latency: 4 microseconds 00:10:19.054 Relative Read Throughput: 0 00:10:19.054 Relative Read Latency: 0 00:10:19.054 Relative Write Throughput: 0 00:10:19.054 Relative Write Latency: 0 00:10:19.054 Idle Power: Not Reported [2024-07-13 21:01:32.793334] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 64019 terminated unexpected 00:10:19.054 Active Power: Not Reported 00:10:19.054 Non-Operational Permissive Mode: Not Supported 00:10:19.054 00:10:19.054 Health Information 00:10:19.054 ================== 00:10:19.054 Critical Warnings: 00:10:19.054 Available Spare Space: OK 00:10:19.054 Temperature: OK 00:10:19.054 Device Reliability: OK 00:10:19.054 Read Only: No 00:10:19.054 Volatile Memory Backup: OK 00:10:19.054 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.054 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.054 Available Spare: 0% 00:10:19.054 Available Spare Threshold: 0% 00:10:19.054 Life Percentage Used: 0% 00:10:19.054 Data Units Read: 1740 00:10:19.054 Data Units Written: 802 00:10:19.054 Host Read Commands: 87420 00:10:19.054 Host Write Commands: 43384 00:10:19.054 Controller Busy Time: 0 minutes 00:10:19.054 Power Cycles: 0 00:10:19.054 Power On Hours: 0 hours 00:10:19.054 Unsafe Shutdowns: 0 00:10:19.054 Unrecoverable Media Errors: 0 00:10:19.054 Lifetime Error Log Entries: 0 00:10:19.054 Warning Temperature Time: 0 minutes 00:10:19.054 Critical Temperature Time: 0 minutes 00:10:19.054 00:10:19.054 Number of Queues 00:10:19.054 ================ 00:10:19.054 Number of I/O Submission Queues: 64 00:10:19.054 Number of I/O Completion Queues: 64 00:10:19.054 00:10:19.054 ZNS Specific Controller Data 00:10:19.054 ============================ 00:10:19.054 Zone Append Size Limit: 0 00:10:19.054 00:10:19.054 00:10:19.054 Active Namespaces 00:10:19.054 ================= 00:10:19.054 Namespace ID:1 00:10:19.054 Error Recovery Timeout: Unlimited 00:10:19.054 Command Set Identifier: NVM (00h) 00:10:19.054 Deallocate: Supported 00:10:19.054 Deallocated/Unwritten Error: Supported 00:10:19.054 Deallocated Read Value: All 0x00 00:10:19.054 Deallocate in Write Zeroes: Not Supported 00:10:19.054 Deallocated Guard Field: 0xFFFF 00:10:19.054 Flush: Supported 00:10:19.054 Reservation: Not Supported 00:10:19.054 Metadata Transferred as: Separate Metadata Buffer 00:10:19.054 Namespace Sharing Capabilities: Private 00:10:19.054 Size (in LBAs): 1548666 (5GiB) 00:10:19.054 Capacity (in LBAs): 1548666 (5GiB)
00:10:19.054 Utilization (in LBAs): 1548666 (5GiB) 00:10:19.054 Thin Provisioning: Not Supported 00:10:19.054 Per-NS Atomic Units: No 00:10:19.054 Maximum Single Source Range Length: 128 00:10:19.054 Maximum Copy Length: 128 00:10:19.054 Maximum Source Range Count: 128 00:10:19.054 NGUID/EUI64 Never Reused: No 00:10:19.054 Namespace Write Protected: No 00:10:19.054 Number of LBA Formats: 8 00:10:19.054 Current LBA Format: LBA Format #07 00:10:19.054 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.054 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.054 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.054 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.054 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.054 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.054 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.054 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.054 00:10:19.054 ===================================================== 00:10:19.054 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:19.054 ===================================================== 00:10:19.054 Controller Capabilities/Features 00:10:19.054 ================================ 00:10:19.054 Vendor ID: 1b36 00:10:19.054 Subsystem Vendor ID: 1af4 00:10:19.054 Serial Number: 12341 00:10:19.054 Model Number: QEMU NVMe Ctrl 00:10:19.054 Firmware Version: 8.0.0 00:10:19.054 Recommended Arb Burst: 6 00:10:19.054 IEEE OUI Identifier: 00 54 52 00:10:19.054 Multi-path I/O 00:10:19.054 May have multiple subsystem ports: No 00:10:19.054 May have multiple controllers: No 00:10:19.054 Associated with SR-IOV VF: No 00:10:19.054 Max Data Transfer Size: 524288 00:10:19.054 Max Number of Namespaces: 256 00:10:19.054 Max Number of I/O Queues: 64 00:10:19.054 NVMe Specification Version (VS): 1.4 00:10:19.054 NVMe Specification Version (Identify): 1.4 00:10:19.054 Maximum Queue Entries: 2048 00:10:19.054 Contiguous Queues Required: Yes 00:10:19.054 Arbitration Mechanisms Supported 00:10:19.054 Weighted Round Robin: Not Supported 00:10:19.054 Vendor Specific: Not Supported 00:10:19.054 Reset Timeout: 7500 ms 00:10:19.054 Doorbell Stride: 4 bytes 00:10:19.054 NVM Subsystem Reset: Not Supported 00:10:19.054 Command Sets Supported 00:10:19.054 NVM Command Set: Supported 00:10:19.054 Boot Partition: Not Supported 00:10:19.054 Memory Page Size Minimum: 4096 bytes 00:10:19.054 Memory Page Size Maximum: 65536 bytes 00:10:19.054 Persistent Memory Region: Not Supported 00:10:19.054 Optional Asynchronous Events Supported 00:10:19.054 Namespace Attribute Notices: Supported 00:10:19.054 Firmware Activation Notices: Not Supported 00:10:19.054 ANA Change Notices: Not Supported 00:10:19.054 PLE Aggregate Log Change Notices: Not Supported 00:10:19.054 LBA Status Info Alert Notices: Not Supported 00:10:19.054 EGE Aggregate Log Change Notices: Not Supported 00:10:19.054 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.054 Zone Descriptor Change Notices: Not Supported 00:10:19.054 Discovery Log Change Notices: Not Supported 00:10:19.054 Controller Attributes 00:10:19.055 128-bit Host Identifier: Not Supported 00:10:19.055 Non-Operational Permissive Mode: Not Supported 00:10:19.055 NVM Sets: Not Supported 00:10:19.055 Read Recovery Levels: Not Supported 00:10:19.055 Endurance Groups: Not Supported 00:10:19.055 Predictable Latency Mode: Not Supported 00:10:19.055 Traffic Based Keep Alive: Not Supported 00:10:19.055 Namespace Granularity: Not Supported 00:10:19.055 SQ Associations: Not
Supported 00:10:19.055 UUID List: Not Supported 00:10:19.055 Multi-Domain Subsystem: Not Supported 00:10:19.055 Fixed Capacity Management: Not Supported 00:10:19.055 Variable Capacity Management: Not Supported 00:10:19.055 Delete Endurance Group: Not Supported 00:10:19.055 Delete NVM Set: Not Supported 00:10:19.055 Extended LBA Formats Supported: Supported 00:10:19.055 Flexible Data Placement Supported: Not Supported 00:10:19.055 00:10:19.055 Controller Memory Buffer Support 00:10:19.055 ================================ 00:10:19.055 Supported: No 00:10:19.055 00:10:19.055 Persistent Memory Region Support 00:10:19.055 ================================ 00:10:19.055 Supported: No 00:10:19.055 00:10:19.055 Admin Command Set Attributes 00:10:19.055 ============================ 00:10:19.055 Security Send/Receive: Not Supported 00:10:19.055 Format NVM: Supported 00:10:19.055 Firmware Activate/Download: Not Supported 00:10:19.055 Namespace Management: Supported 00:10:19.055 Device Self-Test: Not Supported 00:10:19.055 Directives: Supported 00:10:19.055 NVMe-MI: Not Supported 00:10:19.055 Virtualization Management: Not Supported 00:10:19.055 Doorbell Buffer Config: Supported 00:10:19.055 Get LBA Status Capability: Not Supported 00:10:19.055 Command & Feature Lockdown Capability: Not Supported 00:10:19.055 Abort Command Limit: 4 00:10:19.055 Async Event Request Limit: 4 00:10:19.055 Number of Firmware Slots: N/A 00:10:19.055 Firmware Slot 1 Read-Only: N/A 00:10:19.055 Firmware Activation Without Reset: N/A 00:10:19.055 Multiple Update Detection Support: N/A 00:10:19.055 Firmware Update Granularity: No Information Provided 00:10:19.055 Per-Namespace SMART Log: Yes 00:10:19.055 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.055 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:19.055 Command Effects Log Page: Supported 00:10:19.055 Get Log Page Extended Data: Supported 00:10:19.055 Telemetry Log Pages: Not Supported 00:10:19.055 Persistent Event Log Pages: Not Supported 00:10:19.055 Supported Log Pages Log Page: May Support 00:10:19.055 Commands Supported & Effects Log Page: Not Supported 00:10:19.055 Feature Identifiers & Effects Log Page: May Support 00:10:19.055 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.055 Data Area 4 for Telemetry Log: Not Supported 00:10:19.055 Error Log Page Entries Supported: 1 00:10:19.055 Keep Alive: Not Supported 00:10:19.055 00:10:19.055 NVM Command Set Attributes 00:10:19.055 ========================== 00:10:19.055 Submission Queue Entry Size 00:10:19.055 Max: 64 00:10:19.055 Min: 64 00:10:19.055 Completion Queue Entry Size 00:10:19.055 Max: 16 00:10:19.055 Min: 16 00:10:19.055 Number of Namespaces: 256 00:10:19.055 Compare Command: Supported 00:10:19.055 Write Uncorrectable Command: Not Supported 00:10:19.055 Dataset Management Command: Supported 00:10:19.055 Write Zeroes Command: Supported 00:10:19.055 Set Features Save Field: Supported 00:10:19.055 Reservations: Not Supported 00:10:19.055 Timestamp: Supported 00:10:19.055 Copy: Supported 00:10:19.055 Volatile Write Cache: Present 00:10:19.055 Atomic Write Unit (Normal): 1 00:10:19.055 Atomic Write Unit (PFail): 1 00:10:19.055 Atomic Compare & Write Unit: 1 00:10:19.055 Fused Compare & Write: Not Supported 00:10:19.055 Scatter-Gather List 00:10:19.055 SGL Command Set: Supported 00:10:19.055 SGL Keyed: Not Supported 00:10:19.055 SGL Bit Bucket Descriptor: Not Supported 00:10:19.055 SGL Metadata Pointer: Not Supported 00:10:19.055 Oversized SGL: Not Supported 00:10:19.055 SGL Metadata Address:
Not Supported 00:10:19.055 SGL Offset: Not Supported 00:10:19.055 Transport SGL Data Block: Not Supported 00:10:19.055 Replay Protected Memory Block: Not Supported 00:10:19.055 00:10:19.055 Firmware Slot Information 00:10:19.055 ========================= 00:10:19.055 Active slot: 1 00:10:19.055 Slot 1 Firmware Revision: 1.0 00:10:19.055 00:10:19.055 00:10:19.055 Commands Supported and Effects 00:10:19.055 ============================== 00:10:19.055 Admin Commands 00:10:19.055 -------------- 00:10:19.055 Delete I/O Submission Queue (00h): Supported 00:10:19.055 Create I/O Submission Queue (01h): Supported 00:10:19.055 Get Log Page (02h): Supported 00:10:19.055 Delete I/O Completion Queue (04h): Supported 00:10:19.055 Create I/O Completion Queue (05h): Supported 00:10:19.055 Identify (06h): Supported 00:10:19.055 Abort (08h): Supported 00:10:19.055 Set Features (09h): Supported 00:10:19.055 Get Features (0Ah): Supported 00:10:19.055 Asynchronous Event Request (0Ch): Supported 00:10:19.055 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.055 Directive Send (19h): Supported 00:10:19.055 Directive Receive (1Ah): Supported 00:10:19.055 Virtualization Management (1Ch): Supported 00:10:19.055 Doorbell Buffer Config (7Ch): Supported 00:10:19.055 Format NVM (80h): Supported LBA-Change 00:10:19.055 I/O Commands 00:10:19.055 ------------ 00:10:19.055 Flush (00h): Supported LBA-Change 00:10:19.055 Write (01h): Supported LBA-Change 00:10:19.055 Read (02h): Supported 00:10:19.055 Compare (05h): Supported 00:10:19.055 Write Zeroes (08h): Supported LBA-Change 00:10:19.055 Dataset Management (09h): Supported LBA-Change 00:10:19.055 Unknown (0Ch): Supported 00:10:19.055 Unknown (12h): Supported 00:10:19.055 Copy (19h): Supported LBA-Change 00:10:19.055 Unknown (1Dh): Supported LBA-Change 00:10:19.055 00:10:19.055 Error Log 00:10:19.055 ========= 00:10:19.055 00:10:19.055 Arbitration 00:10:19.055 =========== 00:10:19.055 Arbitration Burst: no limit 00:10:19.055 00:10:19.055 Power Management 00:10:19.055 ================ 00:10:19.055 Number of Power States: 1 00:10:19.055 Current Power State: Power State #0 00:10:19.055 Power State #0: 00:10:19.055 Max Power: 25.00 W 00:10:19.055 Non-Operational State: Operational 00:10:19.055 Entry Latency: 16 microseconds 00:10:19.055 Exit Latency: 4 microseconds 00:10:19.055 Relative Read Throughput: 0 00:10:19.055 Relative Read Latency: 0 00:10:19.055 Relative Write Throughput: 0 00:10:19.055 Relative Write Latency: 0 00:10:19.055 Idle Power: Not Reported 00:10:19.055 Active Power: Not Reported 00:10:19.055 Non-Operational Permissive Mode: Not Supported 00:10:19.055 00:10:19.055 Health Information 00:10:19.055 ================== 00:10:19.055 Critical Warnings: 00:10:19.055 Available Spare Space: OK 00:10:19.055 Temperature: OK 00:10:19.055 Device Reliability: OK 00:10:19.055 Read Only: No 00:10:19.055 Volatile Memory Backup: OK 00:10:19.055 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.055 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.055 Available Spare: 0% 00:10:19.055 Available Spare Threshold: 0% 00:10:19.055 Life Percentage Used: 0% 00:10:19.055 Data Units Read: 1167 00:10:19.055 Data Units Written: 543 00:10:19.055 Host Read Commands: 60027 00:10:19.055 Host Write Commands: 29558 00:10:19.055 Controller Busy Time: 0 minutes 00:10:19.055 Power Cycles: 0 00:10:19.055 Power On Hours: 0 hours 00:10:19.055 Unsafe Shutdowns: 0 00:10:19.055 Unrecoverable Media Errors: 0 00:10:19.055 Lifetime Error Log Entries: 0 00:10:19.055 Warning 
Temperature Time: 0 minutes 00:10:19.055 Critical Temperature Time: 0 minutes 00:10:19.055 00:10:19.055 Number of Queues 00:10:19.055 ================ 00:10:19.055 Number of I/O Submission Queues: 64 00:10:19.055 Number of I/O Completion Queues: 64 00:10:19.055 00:10:19.055 ZNS Specific Controller Data 00:10:19.055 ============================ 00:10:19.055 Zone Append Size Limit: 0 00:10:19.055 00:10:19.055 00:10:19.055 Active Namespaces 00:10:19.055 ================= 00:10:19.055 Namespace ID:1 00:10:19.055 Error Recovery Timeout: Unlimited 00:10:19.055 Command Set Identifier: NVM (00h) 00:10:19.055 Deallocate: Supported 00:10:19.055 Deallocated/Unwritten Error: Supported 00:10:19.055 Deallocated Read Value: All 0x00 00:10:19.055 Deallocate in Write Zeroes: Not Supported 00:10:19.055 Deallocated Guard Field: 0xFFFF 00:10:19.055 Flush: Supported 00:10:19.055 Reservation: Not Supported 00:10:19.055 Namespace Sharing Capabilities: Private 00:10:19.055 Size (in LBAs): 1310720 (5GiB) 00:10:19.055 Capacity (in LBAs): 1310720 (5GiB) 00:10:19.055 Utilization (in LBAs): 1310720 (5GiB) 00:10:19.055 Thin Provisioning: Not Supported 00:10:19.055 Per-NS Atomic Units: No 00:10:19.055 Maximum Single Source Range Length: 128 00:10:19.055 Maximum Copy Length: 128 00:10:19.055 Maximum Source Range Count: 128 00:10:19.055 NGUID/EUI64 Never Reused: No 00:10:19.056 Namespace Write Protected: No 00:10:19.056 Number of LBA Formats: 8 00:10:19.056 Current LBA Format: LBA Format #04 00:10:19.056 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.056 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.056 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.056 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.056 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.056 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.056 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.056 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.056 00:10:19.056 ===================================================== 00:10:19.056 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:19.056 ===================================================== 00:10:19.056 Controller Capabilities/Features 00:10:19.056 ================================ 00:10:19.056 Vendor ID: 1b36 00:10:19.056 Subsystem Vendor ID: 1af4 00:10:19.056 Serial Number: 12343 00:10:19.056 Model Number: QEMU NVMe Ctrl 00:10:19.056 Firmware Version: 8.0.0 00:10:19.056 Recommended Arb Burst: 6 00:10:19.056 IEEE OUI Identifier: 00 54 52 00:10:19.056 Multi-path I/O 00:10:19.056 May have multiple subsystem ports: No 00:10:19.056 May have multiple controllers: Yes 00:10:19.056 Associated with SR-IOV VF: No 00:10:19.056 Max Data Transfer Size: 524288 00:10:19.056 Max Number of Namespaces: 256 00:10:19.056 Max Number of I/O Queues: 64 00:10:19.056 NVMe Specification Version (VS): 1.4 00:10:19.056 NVMe Specification Version (Identify): 1.4 00:10:19.056 Maximum Queue Entries: 2048 00:10:19.056 Contiguous Queues Required: Yes 00:10:19.056 Arbitration Mechanisms Supported 00:10:19.056 Weighted Round Robin: Not Supported 00:10:19.056 Vendor Specific: Not Supported 00:10:19.056 Reset Timeout: 7500 ms 00:10:19.056 Doorbell Stride: 4 bytes 00:10:19.056 NVM Subsystem Reset: Not Supported 00:10:19.056 Command Sets Supported 00:10:19.056 NVM Command Set: Supported 00:10:19.056 Boot Partition: Not Supported 00:10:19.056 Memory Page Size Minimum: 4096 bytes 00:10:19.056 Memory Page Size Maximum: 65536 bytes 00:10:19.056 Persistent Memory Region: Not 
Supported 00:10:19.056 Optional Asynchronous Events Supported 00:10:19.056 Namespace Attribute Notices: Supported 00:10:19.056 Firmware Activation Notices: Not Supported 00:10:19.056 ANA Change Notices: Not Supported 00:10:19.056 PLE Aggregate Log Change Notices: Not Supported 00:10:19.056 LBA Status Info Alert Notices: Not Supported 00:10:19.056 EGE Aggregate Log Change Notices: Not Supported 00:10:19.056 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.056 Zone Descriptor Change Notices: Not Supported 00:10:19.056 Discovery Log Change Notices: Not Supported 00:10:19.056 Controller Attributes 00:10:19.056 128-bit Host Identifier: Not Supported 00:10:19.056 Non-Operational Permissive Mode: Not Supported 00:10:19.056 NVM Sets: Not Supported 00:10:19.056 Read Recovery Levels: Not Supported 00:10:19.056 Endurance Groups: Supported 00:10:19.056 Predictable Latency Mode: Not Supported 00:10:19.056 Traffic Based Keep Alive: Not Supported 00:10:19.056 Namespace Granularity: Not Supported 00:10:19.056 SQ Associations: Not Supported 00:10:19.056 UUID List: Not Supported 00:10:19.056 Multi-Domain Subsystem: Not Supported 00:10:19.056 Fixed Capacity Management: Not Supported 00:10:19.056 Variable Capacity Management: Not Supported 00:10:19.056 Delete Endurance Group: Not Supported 00:10:19.056 Delete NVM Set: Not Supported 00:10:19.056 Extended LBA Formats Supported: Supported 00:10:19.056 Flexible Data Placement Supported: Supported 00:10:19.056 00:10:19.056 Controller Memory Buffer Support 00:10:19.056 ================================ 00:10:19.056 Supported: No 00:10:19.056 00:10:19.056 Persistent Memory Region Support 00:10:19.056 ================================ 00:10:19.056 Supported: No 00:10:19.056 00:10:19.056 Admin Command Set Attributes 00:10:19.056 ============================ 00:10:19.056 Security Send/Receive: Not Supported 00:10:19.056 Format NVM: Supported 00:10:19.056 Firmware Activate/Download: Not Supported 00:10:19.056 Namespace Management: Supported 00:10:19.056 Device Self-Test: Not Supported 00:10:19.056 Directives: Supported 00:10:19.056 NVMe-MI: Not Supported 00:10:19.056 Virtualization Management: Not Supported 00:10:19.056 Doorbell Buffer Config: Supported 00:10:19.056 Get LBA Status Capability: Not Supported 00:10:19.056 Command & Feature Lockdown Capability: Not Supported 00:10:19.056 Abort Command Limit: 4 00:10:19.056 Async Event Request Limit: 4 00:10:19.056 Number of Firmware Slots: N/A 00:10:19.056 Firmware Slot 1 Read-Only: N/A 00:10:19.056 Firmware Activation Without Reset: N/A 00:10:19.056 Multiple Update Detection Support: N/A 00:10:19.056 Firmware Update Granularity: No Information Provided 00:10:19.056 Per-Namespace SMART Log: Yes 00:10:19.056 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.056 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:19.056 Command Effects Log Page: Supported 00:10:19.056 Get Log Page Extended Data: Supported 00:10:19.056 Telemetry Log Pages: Not Supported 00:10:19.056 Persistent Event Log Pages: Not Supported 00:10:19.056 Supported Log Pages Log Page: May Support 00:10:19.056 Commands Supported & Effects Log Page: Not Supported 00:10:19.056 Feature Identifiers & Effects Log Page: May Support 00:10:19.056 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.056 Data Area 4 for Telemetry Log: Not Supported 00:10:19.056 Error Log Page Entries Supported: 1 00:10:19.056 Keep Alive: Not Supported 00:10:19.056 00:10:19.056 NVM Command Set Attributes 00:10:19.056 ========================== 00:10:19.056
Submission Queue Entry Size 00:10:19.056 Max: 64 00:10:19.056 Min: 64 00:10:19.056 Completion Queue Entry Size 00:10:19.056 Max: 16 00:10:19.056 Min: 16 00:10:19.056 Number of Namespaces: 256 00:10:19.056 Compare Command: Supported 00:10:19.056 Write Uncorrectable Command: Not Supported 00:10:19.056 Dataset Management Command: Supported 00:10:19.056 Write Zeroes Command: Supported 00:10:19.056 Set Features Save Field: Supported 00:10:19.056 Reservations: Not Supported 00:10:19.056 Timestamp: Supported 00:10:19.056 Copy: Supported 00:10:19.056 Volatile Write Cache: Present 00:10:19.056 Atomic Write Unit (Normal): 1 00:10:19.056 Atomic Write Unit (PFail): 1 00:10:19.056 Atomic Compare & Write Unit: 1 00:10:19.056 Fused Compare & Write: Not Supported 00:10:19.056 Scatter-Gather List 00:10:19.056 SGL Command Set: Supported 00:10:19.056 SGL Keyed: Not Supported 00:10:19.056 SGL Bit Bucket Descriptor: Not Supported 00:10:19.056 SGL Metadata Pointer: Not Supported 00:10:19.056 Oversized SGL: Not Supported 00:10:19.056 SGL Metadata Address: Not Supported 00:10:19.056 SGL Offset: Not Supported 00:10:19.056 Transport SGL Data Block: Not Supported 00:10:19.056 Replay Protected Memory Block: Not Supported 00:10:19.056 00:10:19.056 Firmware Slot Information 00:10:19.056 ========================= 00:10:19.056 Active slot: 1 00:10:19.056 Slot 1 Firmware Revision: 1.0 00:10:19.056 00:10:19.056 00:10:19.056 Commands Supported and Effects 00:10:19.056 ============================== 00:10:19.056 Admin Commands 00:10:19.056 -------------- 00:10:19.056 Delete I/O Submission Queue (00h): Supported 00:10:19.056 Create I/O Submission Queue (01h): Supported 00:10:19.056 Get Log Page (02h): Supported 00:10:19.056 Delete I/O Completion Queue (04h): Supported 00:10:19.056 Create I/O Completion Queue (05h): Supported 00:10:19.056 Identify (06h): Supported 00:10:19.056 Abort (08h): Supported 00:10:19.056 Set Features (09h): Supported 00:10:19.056 Get Features (0Ah): Supported 00:10:19.056 Asynchronous Event Request (0Ch): Supported 00:10:19.056 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.056 Directive Send (19h): Supported 00:10:19.056 Directive Receive (1Ah): Supported 00:10:19.056 Virtualization Management (1Ch): Supported 00:10:19.056 Doorbell Buffer Config (7Ch): Supported 00:10:19.056 Format NVM (80h): Supported LBA-Change 00:10:19.056 I/O Commands 00:10:19.056 ------------ 00:10:19.056 Flush (00h): Supported LBA-Change 00:10:19.056 Write (01h): Supported LBA-Change 00:10:19.056 Read (02h): Supported 00:10:19.056 Compare (05h): Supported 00:10:19.056 Write Zeroes (08h): Supported LBA-Change 00:10:19.056 Dataset Management (09h): Supported LBA-Change 00:10:19.056 Unknown (0Ch): Supported 00:10:19.056 Unknown (12h): Supported 00:10:19.056 Copy (19h): Supported LBA-Change 00:10:19.056 Unknown (1Dh): Supported LBA-Change 00:10:19.056 00:10:19.056 Error Log 00:10:19.056 ========= 00:10:19.056 00:10:19.056 Arbitration 00:10:19.056 =========== 00:10:19.056 Arbitration Burst: no limit 00:10:19.056 00:10:19.056 Power Management 00:10:19.056 ================ 00:10:19.056 Number of Power States: 1 00:10:19.056 Current Power State: Power State #0 00:10:19.056 Power State #0: 00:10:19.056 Max Power: 25.00 W 00:10:19.056 Non-Operational State: Operational 00:10:19.057 Entry Latency: 16 microseconds 00:10:19.057 Exit Latency: 4 microseconds 00:10:19.057 Relative Read Throughput: 0 00:10:19.057 Relative Read Latency: 0 00:10:19.057 Relative Write Throughput: 0 00:10:19.057 Relative Write Latency: 0 
00:10:19.057 Idle Power: Not Reported 00:10:19.057 Active Power: Not Reported 00:10:19.057 Non-Operational Permissive Mode: Not Supported 00:10:19.057 00:10:19.057 Health Information 00:10:19.057 ================== 00:10:19.057 Critical Warnings: 00:10:19.057 Available Spare Space: OK 00:10:19.057 Temperature: OK 00:10:19.057 Device Reliability: OK 00:10:19.057 Read Only: No 00:10:19.057 Volatile Memory Backup: OK 00:10:19.057 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.057 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.057 Available Spare: 0% 00:10:19.057 Available Spare Threshold: 0% 00:10:19.057 Life Percentage Used: 0% 00:10:19.057 Data Units Read: 1239 00:10:19.057 Data Units Written: 575 00:10:19.057 Host Read Commands: 60785 00:10:19.057 Host Write Commands: 29920 00:10:19.057 Controller Busy Time: 0 minutes 00:10:19.057 Power Cycles: 0 00:10:19.057 Power On Hours: 0 hours 00:10:19.057 Unsafe Shutdowns: 0 00:10:19.057 Unrecoverable Media Errors: 0 00:10:19.057 Lifetime Error Log Entries: 0 00:10:19.057 Warning Temperature Time: 0 minutes 00:10:19.057 Critical Temperature Time: 0 minutes 00:10:19.057 00:10:19.057 Number of Queues 00:10:19.057 ================ 00:10:19.057 Number of I/O Submission Queues: 64 00:10:19.057 Number of I/O Completion Queues: 64 00:10:19.057 00:10:19.057 ZNS Specific Controller Data 00:10:19.057 ============================ 00:10:19.057 Zone Append Size Limit: 0 00:10:19.057 00:10:19.057 00:10:19.057 Active Namespaces 00:10:19.057 ================= 00:10:19.057 Namespace ID:1 00:10:19.057 Error Recovery Timeout: Unlimited 00:10:19.057 Command Set Identifier: NVM (00h) 00:10:19.057 Deallocate: Supported 00:10:19.057 Deallocated/Unwritten Error: Supported 00:10:19.057 Deallocated Read Value: All 0x00 00:10:19.057 Deallocate in Write Zeroes: Not Supported 00:10:19.057 Deallocated Guard Field: 0xFFFF 00:10:19.057 Flush: Supported 00:10:19.057 Reservation: Not Supported 00:10:19.057 Namespace Sharing Capabilities: Multiple Controllers 00:10:19.057 Size (in LBAs): 262144 (1GiB) 00:10:19.057 Capacity (in LBAs): 262144 (1GiB) 00:10:19.057 Utilization (in LBAs): 262144 (1GiB) 00:10:19.057 Thin Provisioning: Not Supported 00:10:19.057 Per-NS Atomic Units: No 00:10:19.057 Maximum Single Source Range Length: 128 00:10:19.057 Maximum Copy Length: 128 00:10:19.057 Maximum Source Range Count: 128 00:10:19.057 NGUID/EUI64 Never Reused: No 00:10:19.057 Namespace Write Protected: No 00:10:19.057 Endurance group ID: 1 00:10:19.057 Number of LBA Formats: 8 00:10:19.057 Current LBA Format: LBA Format #04 00:10:19.057 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.057 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.057 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.057 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.057 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.057 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.057 LBA Format #06: Data Size: 4096 Metadata Size: 16 [2024-07-13 21:01:32.794337] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 64019 terminated unexpected 00:10:19.057 [2024-07-13 21:01:32.796224] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 64019 terminated unexpected 00:10:19.057 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.057 00:10:19.057 Get Feature FDP: 00:10:19.057 ================ 00:10:19.057 Enabled: Yes 00:10:19.057 FDP configuration index: 0 00:10:19.057 00:10:19.057 FDP
configurations log page 00:10:19.057 =========================== 00:10:19.057 Number of FDP configurations: 1 00:10:19.057 Version: 0 00:10:19.057 Size: 112 00:10:19.057 FDP Configuration Descriptor: 0 00:10:19.057 Descriptor Size: 96 00:10:19.057 Reclaim Group Identifier format: 2 00:10:19.057 FDP Volatile Write Cache: Not Present 00:10:19.057 FDP Configuration: Valid 00:10:19.057 Vendor Specific Size: 0 00:10:19.057 Number of Reclaim Groups: 2 00:10:19.057 Number of Reclaim Unit Handles: 8 00:10:19.057 Max Placement Identifiers: 128 00:10:19.057 Number of Namespaces Supported: 256 00:10:19.057 Reclaim Unit Nominal Size: 6000000 bytes 00:10:19.057 Estimated Reclaim Unit Time Limit: Not Reported 00:10:19.057 RUH Desc #000: RUH Type: Initially Isolated 00:10:19.057 RUH Desc #001: RUH Type: Initially Isolated 00:10:19.057 RUH Desc #002: RUH Type: Initially Isolated 00:10:19.057 RUH Desc #003: RUH Type: Initially Isolated 00:10:19.057 RUH Desc #004: RUH Type: Initially Isolated 00:10:19.057 RUH Desc #005: RUH Type: Initially Isolated 00:10:19.057 RUH Desc #006: RUH Type: Initially Isolated 00:10:19.057 RUH Desc #007: RUH Type: Initially Isolated 00:10:19.057 00:10:19.057 FDP reclaim unit handle usage log page 00:10:19.057 ====================================== 00:10:19.057 Number of Reclaim Unit Handles: 8 00:10:19.057 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:19.057 RUH Usage Desc #001: RUH Attributes: Unused 00:10:19.057 RUH Usage Desc #002: RUH Attributes: Unused 00:10:19.057 RUH Usage Desc #003: RUH Attributes: Unused 00:10:19.057 RUH Usage Desc #004: RUH Attributes: Unused 00:10:19.057 RUH Usage Desc #005: RUH Attributes: Unused 00:10:19.057 RUH Usage Desc #006: RUH Attributes: Unused 00:10:19.057 RUH Usage Desc #007: RUH Attributes: Unused 00:10:19.057 00:10:19.057 FDP statistics log page 00:10:19.057 ======================= 00:10:19.057 Host bytes with metadata written: 372441088 00:10:19.057 Media bytes with metadata written: 372572160 00:10:19.057 Media bytes erased: 0 00:10:19.057 00:10:19.057 FDP events log page 00:10:19.057 =================== 00:10:19.057 Number of FDP events: 0 00:10:19.057 00:10:19.057 ===================================================== 00:10:19.057 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:19.057 ===================================================== 00:10:19.057 Controller Capabilities/Features 00:10:19.057 ================================ 00:10:19.057 Vendor ID: 1b36 00:10:19.057 Subsystem Vendor ID: 1af4 00:10:19.057 Serial Number: 12342 00:10:19.057 Model Number: QEMU NVMe Ctrl 00:10:19.057 Firmware Version: 8.0.0 00:10:19.057 Recommended Arb Burst: 6 00:10:19.057 IEEE OUI Identifier: 00 54 52 00:10:19.057 Multi-path I/O 00:10:19.057 May have multiple subsystem ports: No 00:10:19.057 May have multiple controllers: No 00:10:19.057 Associated with SR-IOV VF: No 00:10:19.057 Max Data Transfer Size: 524288 00:10:19.057 Max Number of Namespaces: 256 00:10:19.057 Max Number of I/O Queues: 64 00:10:19.057 NVMe Specification Version (VS): 1.4 00:10:19.057 NVMe Specification Version (Identify): 1.4 00:10:19.057 Maximum Queue Entries: 2048 00:10:19.057 Contiguous Queues Required: Yes 00:10:19.057 Arbitration Mechanisms Supported 00:10:19.057 Weighted Round Robin: Not Supported 00:10:19.057 Vendor Specific: Not Supported 00:10:19.057 Reset Timeout: 7500 ms 00:10:19.057 Doorbell Stride: 4 bytes 00:10:19.057 NVM Subsystem Reset: Not Supported 00:10:19.057 Command Sets Supported 00:10:19.057 NVM Command Set: Supported 00:10:19.057 Boot
Partition: Not Supported 00:10:19.057 Memory Page Size Minimum: 4096 bytes 00:10:19.057 Memory Page Size Maximum: 65536 bytes 00:10:19.057 Persistent Memory Region: Not Supported 00:10:19.057 Optional Asynchronous Events Supported 00:10:19.057 Namespace Attribute Notices: Supported 00:10:19.057 Firmware Activation Notices: Not Supported 00:10:19.057 ANA Change Notices: Not Supported 00:10:19.057 PLE Aggregate Log Change Notices: Not Supported 00:10:19.057 LBA Status Info Alert Notices: Not Supported 00:10:19.057 EGE Aggregate Log Change Notices: Not Supported 00:10:19.057 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.057 Zone Descriptor Change Notices: Not Supported 00:10:19.057 Discovery Log Change Notices: Not Supported 00:10:19.057 Controller Attributes 00:10:19.057 128-bit Host Identifier: Not Supported 00:10:19.057 Non-Operational Permissive Mode: Not Supported 00:10:19.057 NVM Sets: Not Supported 00:10:19.057 Read Recovery Levels: Not Supported 00:10:19.057 Endurance Groups: Not Supported 00:10:19.057 Predictable Latency Mode: Not Supported 00:10:19.057 Traffic Based Keep Alive: Not Supported 00:10:19.057 Namespace Granularity: Not Supported 00:10:19.057 SQ Associations: Not Supported 00:10:19.057 UUID List: Not Supported 00:10:19.057 Multi-Domain Subsystem: Not Supported 00:10:19.057 Fixed Capacity Management: Not Supported 00:10:19.057 Variable Capacity Management: Not Supported 00:10:19.057 Delete Endurance Group: Not Supported 00:10:19.057 Delete NVM Set: Not Supported 00:10:19.057 Extended LBA Formats Supported: Supported 00:10:19.057 Flexible Data Placement Supported: Not Supported 00:10:19.057 00:10:19.057 Controller Memory Buffer Support 00:10:19.057 ================================ 00:10:19.057 Supported: No 00:10:19.057 00:10:19.057 Persistent Memory Region Support 00:10:19.057 ================================ 00:10:19.057 Supported: No 00:10:19.057 00:10:19.057 Admin Command Set Attributes 00:10:19.057 ============================ 00:10:19.057 Security Send/Receive: Not Supported 00:10:19.058 Format NVM: Supported 00:10:19.058 Firmware Activate/Download: Not Supported 00:10:19.058 Namespace Management: Supported 00:10:19.058 Device Self-Test: Not Supported 00:10:19.058 Directives: Supported 00:10:19.058 NVMe-MI: Not Supported 00:10:19.058 Virtualization Management: Not Supported 00:10:19.058 Doorbell Buffer Config: Supported 00:10:19.058 Get LBA Status Capability: Not Supported 00:10:19.058 Command & Feature Lockdown Capability: Not Supported 00:10:19.058 Abort Command Limit: 4 00:10:19.058 Async Event Request Limit: 4 00:10:19.058 Number of Firmware Slots: N/A 00:10:19.058 Firmware Slot 1 Read-Only: N/A 00:10:19.058 Firmware Activation Without Reset: N/A 00:10:19.058 Multiple Update Detection Support: N/A 00:10:19.058 Firmware Update Granularity: No Information Provided 00:10:19.058 Per-Namespace SMART Log: Yes 00:10:19.058 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.058 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:19.058 Command Effects Log Page: Supported 00:10:19.058 Get Log Page Extended Data: Supported 00:10:19.058 Telemetry Log Pages: Not Supported 00:10:19.058 Persistent Event Log Pages: Not Supported 00:10:19.058 Supported Log Pages Log Page: May Support 00:10:19.058 Commands Supported & Effects Log Page: Not Supported 00:10:19.058 Feature Identifiers & Effects Log Page: May Support 00:10:19.058 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.058 Data Area 4 for Telemetry Log: Not Supported 00:10:19.058 Error Log Page
Entries Supported: 1 00:10:19.058 Keep Alive: Not Supported 00:10:19.058 00:10:19.058 NVM Command Set Attributes 00:10:19.058 ========================== 00:10:19.058 Submission Queue Entry Size 00:10:19.058 Max: 64 00:10:19.058 Min: 64 00:10:19.058 Completion Queue Entry Size 00:10:19.058 Max: 16 00:10:19.058 Min: 16 00:10:19.058 Number of Namespaces: 256 00:10:19.058 Compare Command: Supported 00:10:19.058 Write Uncorrectable Command: Not Supported 00:10:19.058 Dataset Management Command: Supported 00:10:19.058 Write Zeroes Command: Supported 00:10:19.058 Set Features Save Field: Supported 00:10:19.058 Reservations: Not Supported 00:10:19.058 Timestamp: Supported 00:10:19.058 Copy: Supported 00:10:19.058 Volatile Write Cache: Present 00:10:19.058 Atomic Write Unit (Normal): 1 00:10:19.058 Atomic Write Unit (PFail): 1 00:10:19.058 Atomic Compare & Write Unit: 1 00:10:19.058 Fused Compare & Write: Not Supported 00:10:19.058 Scatter-Gather List 00:10:19.058 SGL Command Set: Supported 00:10:19.058 SGL Keyed: Not Supported 00:10:19.058 SGL Bit Bucket Descriptor: Not Supported 00:10:19.058 SGL Metadata Pointer: Not Supported 00:10:19.058 Oversized SGL: Not Supported 00:10:19.058 SGL Metadata Address: Not Supported 00:10:19.058 SGL Offset: Not Supported 00:10:19.058 Transport SGL Data Block: Not Supported 00:10:19.058 Replay Protected Memory Block: Not Supported 00:10:19.058 00:10:19.058 Firmware Slot Information 00:10:19.058 ========================= 00:10:19.058 Active slot: 1 00:10:19.058 Slot 1 Firmware Revision: 1.0 00:10:19.058 00:10:19.058 00:10:19.058 Commands Supported and Effects 00:10:19.058 ============================== 00:10:19.058 Admin Commands 00:10:19.058 -------------- 00:10:19.058 Delete I/O Submission Queue (00h): Supported 00:10:19.058 Create I/O Submission Queue (01h): Supported 00:10:19.058 Get Log Page (02h): Supported 00:10:19.058 Delete I/O Completion Queue (04h): Supported 00:10:19.058 Create I/O Completion Queue (05h): Supported 00:10:19.058 Identify (06h): Supported 00:10:19.058 Abort (08h): Supported 00:10:19.058 Set Features (09h): Supported 00:10:19.058 Get Features (0Ah): Supported 00:10:19.058 Asynchronous Event Request (0Ch): Supported 00:10:19.058 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.058 Directive Send (19h): Supported 00:10:19.058 Directive Receive (1Ah): Supported 00:10:19.058 Virtualization Management (1Ch): Supported 00:10:19.058 Doorbell Buffer Config (7Ch): Supported 00:10:19.058 Format NVM (80h): Supported LBA-Change 00:10:19.058 I/O Commands 00:10:19.058 ------------ 00:10:19.058 Flush (00h): Supported LBA-Change 00:10:19.058 Write (01h): Supported LBA-Change 00:10:19.058 Read (02h): Supported 00:10:19.058 Compare (05h): Supported 00:10:19.058 Write Zeroes (08h): Supported LBA-Change 00:10:19.058 Dataset Management (09h): Supported LBA-Change 00:10:19.058 Unknown (0Ch): Supported 00:10:19.058 Unknown (12h): Supported 00:10:19.058 Copy (19h): Supported LBA-Change 00:10:19.058 Unknown (1Dh): Supported LBA-Change 00:10:19.058 00:10:19.058 Error Log 00:10:19.058 ========= 00:10:19.058 00:10:19.058 Arbitration 00:10:19.058 =========== 00:10:19.058 Arbitration Burst: no limit 00:10:19.058 00:10:19.058 Power Management 00:10:19.058 ================ 00:10:19.058 Number of Power States: 1 00:10:19.058 Current Power State: Power State #0 00:10:19.058 Power State #0: 00:10:19.058 Max Power: 25.00 W 00:10:19.058 Non-Operational State: Operational 00:10:19.058 Entry Latency: 16 microseconds 00:10:19.058 Exit Latency: 4 microseconds 
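A quick aside on the queue geometry these dumps report: with Maximum Queue Entries of 2048, 64-byte submission queue entries, and 16-byte completion queue entries, a maximally sized queue pair costs 128 KiB of SQ memory plus 32 KiB of CQ memory. A minimal shell sketch of the arithmetic, using only figures from the identify output above:

$ echo "$(( 2048 * 64 / 1024 )) KiB"   # submission queue: 2048 entries x 64 B = 128 KiB
$ echo "$(( 2048 * 16 / 1024 )) KiB"   # completion queue: 2048 entries x 16 B = 32 KiB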
00:10:19.058 Relative Read Throughput: 0 00:10:19.058 Relative Read Latency: 0 00:10:19.058 Relative Write Throughput: 0 00:10:19.058 Relative Write Latency: 0 00:10:19.058 Idle Power: Not Reported 00:10:19.058 Active Power: Not Reported 00:10:19.058 Non-Operational Permissive Mode: Not Supported 00:10:19.058 00:10:19.058 Health Information 00:10:19.058 ================== 00:10:19.058 Critical Warnings: 00:10:19.058 Available Spare Space: OK 00:10:19.058 Temperature: OK 00:10:19.058 Device Reliability: OK 00:10:19.058 Read Only: No 00:10:19.058 Volatile Memory Backup: OK 00:10:19.058 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.058 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.058 Available Spare: 0% 00:10:19.058 Available Spare Threshold: 0% 00:10:19.058 Life Percentage Used: 0% 00:10:19.058 Data Units Read: 3612 00:10:19.058 Data Units Written: 1671 00:10:19.058 Host Read Commands: 181624 00:10:19.058 Host Write Commands: 89309 00:10:19.058 Controller Busy Time: 0 minutes 00:10:19.058 Power Cycles: 0 00:10:19.058 Power On Hours: 0 hours 00:10:19.058 Unsafe Shutdowns: 0 00:10:19.058 Unrecoverable Media Errors: 0 00:10:19.058 Lifetime Error Log Entries: 0 00:10:19.058 Warning Temperature Time: 0 minutes 00:10:19.058 Critical Temperature Time: 0 minutes 00:10:19.058 00:10:19.058 Number of Queues 00:10:19.058 ================ 00:10:19.058 Number of I/O Submission Queues: 64 00:10:19.058 Number of I/O Completion Queues: 64 00:10:19.058 00:10:19.058 ZNS Specific Controller Data 00:10:19.058 ============================ 00:10:19.058 Zone Append Size Limit: 0 00:10:19.058 00:10:19.058 00:10:19.058 Active Namespaces 00:10:19.058 ================= 00:10:19.058 Namespace ID:1 00:10:19.058 Error Recovery Timeout: Unlimited 00:10:19.058 Command Set Identifier: NVM (00h) 00:10:19.058 Deallocate: Supported 00:10:19.058 Deallocated/Unwritten Error: Supported 00:10:19.058 Deallocated Read Value: All 0x00 00:10:19.058 Deallocate in Write Zeroes: Not Supported 00:10:19.058 Deallocated Guard Field: 0xFFFF 00:10:19.058 Flush: Supported 00:10:19.058 Reservation: Not Supported 00:10:19.058 Namespace Sharing Capabilities: Private 00:10:19.058 Size (in LBAs): 1048576 (4GiB) 00:10:19.058 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.058 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.058 Thin Provisioning: Not Supported 00:10:19.058 Per-NS Atomic Units: No 00:10:19.058 Maximum Single Source Range Length: 128 00:10:19.059 Maximum Copy Length: 128 00:10:19.059 Maximum Source Range Count: 128 00:10:19.059 NGUID/EUI64 Never Reused: No 00:10:19.059 Namespace Write Protected: No 00:10:19.059 Number of LBA Formats: 8 00:10:19.059 Current LBA Format: LBA Format #04 00:10:19.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.059 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.059 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.059 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.059 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.059 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.059 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.059 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.059 00:10:19.059 Namespace ID:2 00:10:19.059 Error Recovery Timeout: Unlimited 00:10:19.059 Command Set Identifier: NVM (00h) 00:10:19.059 Deallocate: Supported 00:10:19.059 Deallocated/Unwritten Error: Supported 00:10:19.059 Deallocated Read Value: All 0x00 00:10:19.059 Deallocate in Write Zeroes: Not Supported 00:10:19.059 Deallocated Guard Field: 
0xFFFF 00:10:19.059 Flush: Supported 00:10:19.059 Reservation: Not Supported 00:10:19.059 Namespace Sharing Capabilities: Private 00:10:19.059 Size (in LBAs): 1048576 (4GiB) 00:10:19.059 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.059 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.059 Thin Provisioning: Not Supported 00:10:19.059 Per-NS Atomic Units: No 00:10:19.059 Maximum Single Source Range Length: 128 00:10:19.059 Maximum Copy Length: 128 00:10:19.059 Maximum Source Range Count: 128 00:10:19.059 NGUID/EUI64 Never Reused: No 00:10:19.059 Namespace Write Protected: No 00:10:19.059 Number of LBA Formats: 8 00:10:19.059 Current LBA Format: LBA Format #04 00:10:19.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.059 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.059 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.059 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.059 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.059 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.059 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.059 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.059 00:10:19.059 Namespace ID:3 00:10:19.059 Error Recovery Timeout: Unlimited 00:10:19.059 Command Set Identifier: NVM (00h) 00:10:19.059 Deallocate: Supported 00:10:19.059 Deallocated/Unwritten Error: Supported 00:10:19.059 Deallocated Read Value: All 0x00 00:10:19.059 Deallocate in Write Zeroes: Not Supported 00:10:19.059 Deallocated Guard Field: 0xFFFF 00:10:19.059 Flush: Supported 00:10:19.059 Reservation: Not Supported 00:10:19.059 Namespace Sharing Capabilities: Private 00:10:19.059 Size (in LBAs): 1048576 (4GiB) 00:10:19.059 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.059 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.059 Thin Provisioning: Not Supported 00:10:19.059 Per-NS Atomic Units: No 00:10:19.059 Maximum Single Source Range Length: 128 00:10:19.059 Maximum Copy Length: 128 00:10:19.059 Maximum Source Range Count: 128 00:10:19.059 NGUID/EUI64 Never Reused: No 00:10:19.059 Namespace Write Protected: No 00:10:19.059 Number of LBA Formats: 8 00:10:19.059 Current LBA Format: LBA Format #04 00:10:19.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.059 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.059 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.059 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.059 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.059 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.059 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.059 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.059 00:10:19.059 21:01:32 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:19.059 21:01:32 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:10:19.318 ===================================================== 00:10:19.318 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:19.318 ===================================================== 00:10:19.318 Controller Capabilities/Features 00:10:19.318 ================================ 00:10:19.318 Vendor ID: 1b36 00:10:19.318 Subsystem Vendor ID: 1af4 00:10:19.318 Serial Number: 12340 00:10:19.318 Model Number: QEMU NVMe Ctrl 00:10:19.318 Firmware Version: 8.0.0 00:10:19.318 Recommended Arb Burst: 6 00:10:19.318 IEEE OUI Identifier: 00 54 52 00:10:19.318 Multi-path I/O 00:10:19.318 May have multiple subsystem ports: No 00:10:19.318 May have 
multiple controllers: No 00:10:19.318 Associated with SR-IOV VF: No 00:10:19.318 Max Data Transfer Size: 524288 00:10:19.318 Max Number of Namespaces: 256 00:10:19.318 Max Number of I/O Queues: 64 00:10:19.318 NVMe Specification Version (VS): 1.4 00:10:19.318 NVMe Specification Version (Identify): 1.4 00:10:19.318 Maximum Queue Entries: 2048 00:10:19.318 Contiguous Queues Required: Yes 00:10:19.318 Arbitration Mechanisms Supported 00:10:19.318 Weighted Round Robin: Not Supported 00:10:19.318 Vendor Specific: Not Supported 00:10:19.318 Reset Timeout: 7500 ms 00:10:19.318 Doorbell Stride: 4 bytes 00:10:19.318 NVM Subsystem Reset: Not Supported 00:10:19.318 Command Sets Supported 00:10:19.318 NVM Command Set: Supported 00:10:19.318 Boot Partition: Not Supported 00:10:19.318 Memory Page Size Minimum: 4096 bytes 00:10:19.318 Memory Page Size Maximum: 65536 bytes 00:10:19.318 Persistent Memory Region: Not Supported 00:10:19.318 Optional Asynchronous Events Supported 00:10:19.318 Namespace Attribute Notices: Supported 00:10:19.318 Firmware Activation Notices: Not Supported 00:10:19.318 ANA Change Notices: Not Supported 00:10:19.318 PLE Aggregate Log Change Notices: Not Supported 00:10:19.318 LBA Status Info Alert Notices: Not Supported 00:10:19.318 EGE Aggregate Log Change Notices: Not Supported 00:10:19.318 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.318 Zone Descriptor Change Notices: Not Supported 00:10:19.318 Discovery Log Change Notices: Not Supported 00:10:19.318 Controller Attributes 00:10:19.318 128-bit Host Identifier: Not Supported 00:10:19.318 Non-Operational Permissive Mode: Not Supported 00:10:19.318 NVM Sets: Not Supported 00:10:19.318 Read Recovery Levels: Not Supported 00:10:19.318 Endurance Groups: Not Supported 00:10:19.318 Predictable Latency Mode: Not Supported 00:10:19.318 Traffic Based Keep Alive: Not Supported 00:10:19.318 Namespace Granularity: Not Supported 00:10:19.318 SQ Associations: Not Supported 00:10:19.318 UUID List: Not Supported 00:10:19.318 Multi-Domain Subsystem: Not Supported 00:10:19.318 Fixed Capacity Management: Not Supported 00:10:19.318 Variable Capacity Management: Not Supported 00:10:19.318 Delete Endurance Group: Not Supported 00:10:19.318 Delete NVM Set: Not Supported 00:10:19.318 Extended LBA Formats Supported: Supported 00:10:19.318 Flexible Data Placement Supported: Not Supported 00:10:19.318 00:10:19.318 Controller Memory Buffer Support 00:10:19.318 ================================ 00:10:19.318 Supported: No 00:10:19.318 00:10:19.318 Persistent Memory Region Support 00:10:19.318 ================================ 00:10:19.318 Supported: No 00:10:19.318 00:10:19.318 Admin Command Set Attributes 00:10:19.318 ============================ 00:10:19.318 Security Send/Receive: Not Supported 00:10:19.318 Format NVM: Supported 00:10:19.318 Firmware Activate/Download: Not Supported 00:10:19.318 Namespace Management: Supported 00:10:19.318 Device Self-Test: Not Supported 00:10:19.318 Directives: Supported 00:10:19.318 NVMe-MI: Not Supported 00:10:19.318 Virtualization Management: Not Supported 00:10:19.318 Doorbell Buffer Config: Supported 00:10:19.318 Get LBA Status Capability: Not Supported 00:10:19.318 Command & Feature Lockdown Capability: Not Supported 00:10:19.318 Abort Command Limit: 4 00:10:19.318 Async Event Request Limit: 4 00:10:19.318 Number of Firmware Slots: N/A 00:10:19.318 Firmware Slot 1 Read-Only: N/A 00:10:19.318 Firmware Activation Without Reset: N/A 00:10:19.318 Multiple Update Detection Support: N/A 00:10:19.318 Firmware
Update Granularity: No Information Provided 00:10:19.318 Per-Namespace SMART Log: Yes 00:10:19.318 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.318 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:19.318 Command Effects Log Page: Supported 00:10:19.318 Get Log Page Extended Data: Supported 00:10:19.318 Telemetry Log Pages: Not Supported 00:10:19.318 Persistent Event Log Pages: Not Supported 00:10:19.318 Supported Log Pages Log Page: May Support 00:10:19.318 Commands Supported & Effects Log Page: Not Supported 00:10:19.318 Feature Identifiers & Effects Log Page: May Support 00:10:19.318 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.318 Data Area 4 for Telemetry Log: Not Supported 00:10:19.318 Error Log Page Entries Supported: 1 00:10:19.318 Keep Alive: Not Supported 00:10:19.318 00:10:19.318 NVM Command Set Attributes 00:10:19.318 ========================== 00:10:19.318 Submission Queue Entry Size 00:10:19.318 Max: 64 00:10:19.318 Min: 64 00:10:19.318 Completion Queue Entry Size 00:10:19.318 Max: 16 00:10:19.318 Min: 16 00:10:19.318 Number of Namespaces: 256 00:10:19.318 Compare Command: Supported 00:10:19.318 Write Uncorrectable Command: Not Supported 00:10:19.318 Dataset Management Command: Supported 00:10:19.318 Write Zeroes Command: Supported 00:10:19.318 Set Features Save Field: Supported 00:10:19.318 Reservations: Not Supported 00:10:19.318 Timestamp: Supported 00:10:19.318 Copy: Supported 00:10:19.318 Volatile Write Cache: Present 00:10:19.318 Atomic Write Unit (Normal): 1 00:10:19.318 Atomic Write Unit (PFail): 1 00:10:19.318 Atomic Compare & Write Unit: 1 00:10:19.318 Fused Compare & Write: Not Supported 00:10:19.318 Scatter-Gather List 00:10:19.318 SGL Command Set: Supported 00:10:19.318 SGL Keyed: Not Supported 00:10:19.318 SGL Bit Bucket Descriptor: Not Supported 00:10:19.318 SGL Metadata Pointer: Not Supported 00:10:19.318 Oversized SGL: Not Supported 00:10:19.318 SGL Metadata Address: Not Supported 00:10:19.318 SGL Offset: Not Supported 00:10:19.318 Transport SGL Data Block: Not Supported 00:10:19.318 Replay Protected Memory Block: Not Supported 00:10:19.318 00:10:19.318 Firmware Slot Information 00:10:19.318 ========================= 00:10:19.318 Active slot: 1 00:10:19.318 Slot 1 Firmware Revision: 1.0 00:10:19.318 00:10:19.318 00:10:19.318 Commands Supported and Effects 00:10:19.318 ============================== 00:10:19.318 Admin Commands 00:10:19.318 -------------- 00:10:19.318 Delete I/O Submission Queue (00h): Supported 00:10:19.318 Create I/O Submission Queue (01h): Supported 00:10:19.318 Get Log Page (02h): Supported 00:10:19.318 Delete I/O Completion Queue (04h): Supported 00:10:19.318 Create I/O Completion Queue (05h): Supported 00:10:19.318 Identify (06h): Supported 00:10:19.318 Abort (08h): Supported 00:10:19.318 Set Features (09h): Supported 00:10:19.318 Get Features (0Ah): Supported 00:10:19.318 Asynchronous Event Request (0Ch): Supported 00:10:19.318 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.318 Directive Send (19h): Supported 00:10:19.318 Directive Receive (1Ah): Supported 00:10:19.318 Virtualization Management (1Ch): Supported 00:10:19.318 Doorbell Buffer Config (7Ch): Supported 00:10:19.318 Format NVM (80h): Supported LBA-Change 00:10:19.318 I/O Commands 00:10:19.318 ------------ 00:10:19.318 Flush (00h): Supported LBA-Change 00:10:19.318 Write (01h): Supported LBA-Change 00:10:19.318 Read (02h): Supported 00:10:19.318 Compare (05h): Supported 00:10:19.318 Write Zeroes (08h): Supported LBA-Change
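The 0000:00:06.0 controller (serial 12340) reports the same capability set as 12342 above; only per-controller fields such as Serial Number, Subsystem NQN, and the namespace layout differ. A quick way to surface just those differences is to diff two identify runs with bash process substitution; a sketch, assuming the same binary path the test invokes:

$ IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
$ diff <("$IDENTIFY" -r 'trtype:PCIe traddr:0000:00:06.0' -i 0) \
       <("$IDENTIFY" -r 'trtype:PCIe traddr:0000:00:08.0' -i 0)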
00:10:19.318 Dataset Management (09h): Supported LBA-Change 00:10:19.318 Unknown (0Ch): Supported 00:10:19.318 Unknown (12h): Supported 00:10:19.319 Copy (19h): Supported LBA-Change 00:10:19.319 Unknown (1Dh): Supported LBA-Change 00:10:19.319 00:10:19.319 Error Log 00:10:19.319 ========= 00:10:19.319 00:10:19.319 Arbitration 00:10:19.319 =========== 00:10:19.319 Arbitration Burst: no limit 00:10:19.319 00:10:19.319 Power Management 00:10:19.319 ================ 00:10:19.319 Number of Power States: 1 00:10:19.319 Current Power State: Power State #0 00:10:19.319 Power State #0: 00:10:19.319 Max Power: 25.00 W 00:10:19.319 Non-Operational State: Operational 00:10:19.319 Entry Latency: 16 microseconds 00:10:19.319 Exit Latency: 4 microseconds 00:10:19.319 Relative Read Throughput: 0 00:10:19.319 Relative Read Latency: 0 00:10:19.319 Relative Write Throughput: 0 00:10:19.319 Relative Write Latency: 0 00:10:19.319 Idle Power: Not Reported 00:10:19.319 Active Power: Not Reported 00:10:19.319 Non-Operational Permissive Mode: Not Supported 00:10:19.319 00:10:19.319 Health Information 00:10:19.319 ================== 00:10:19.319 Critical Warnings: 00:10:19.319 Available Spare Space: OK 00:10:19.319 Temperature: OK 00:10:19.319 Device Reliability: OK 00:10:19.319 Read Only: No 00:10:19.319 Volatile Memory Backup: OK 00:10:19.319 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.319 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.319 Available Spare: 0% 00:10:19.319 Available Spare Threshold: 0% 00:10:19.319 Life Percentage Used: 0% 00:10:19.319 Data Units Read: 1740 00:10:19.319 Data Units Written: 802 00:10:19.319 Host Read Commands: 87420 00:10:19.319 Host Write Commands: 43384 00:10:19.319 Controller Busy Time: 0 minutes 00:10:19.319 Power Cycles: 0 00:10:19.319 Power On Hours: 0 hours 00:10:19.319 Unsafe Shutdowns: 0 00:10:19.319 Unrecoverable Media Errors: 0 00:10:19.319 Lifetime Error Log Entries: 0 00:10:19.319 Warning Temperature Time: 0 minutes 00:10:19.319 Critical Temperature Time: 0 minutes 00:10:19.319 00:10:19.319 Number of Queues 00:10:19.319 ================ 00:10:19.319 Number of I/O Submission Queues: 64 00:10:19.319 Number of I/O Completion Queues: 64 00:10:19.319 00:10:19.319 ZNS Specific Controller Data 00:10:19.319 ============================ 00:10:19.319 Zone Append Size Limit: 0 00:10:19.319 00:10:19.319 00:10:19.319 Active Namespaces 00:10:19.319 ================= 00:10:19.319 Namespace ID:1 00:10:19.319 Error Recovery Timeout: Unlimited 00:10:19.319 Command Set Identifier: NVM (00h) 00:10:19.319 Deallocate: Supported 00:10:19.319 Deallocated/Unwritten Error: Supported 00:10:19.319 Deallocated Read Value: All 0x00 00:10:19.319 Deallocate in Write Zeroes: Not Supported 00:10:19.319 Deallocated Guard Field: 0xFFFF 00:10:19.319 Flush: Supported 00:10:19.319 Reservation: Not Supported 00:10:19.319 Metadata Transferred as: Separate Metadata Buffer 00:10:19.319 Namespace Sharing Capabilities: Private 00:10:19.319 Size (in LBAs): 1548666 (5GiB) 00:10:19.319 Capacity (in LBAs): 1548666 (5GiB) 00:10:19.319 Utilization (in LBAs): 1548666 (5GiB) 00:10:19.319 Thin Provisioning: Not Supported 00:10:19.319 Per-NS Atomic Units: No 00:10:19.319 Maximum Single Source Range Length: 128 00:10:19.319 Maximum Copy Length: 128 00:10:19.319 Maximum Source Range Count: 128 00:10:19.319 NGUID/EUI64 Never Reused: No 00:10:19.319 Namespace Write Protected: No 00:10:19.319 Number of LBA Formats: 8 00:10:19.319 Current LBA Format: LBA Format #07 00:10:19.319 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:10:19.319 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.319 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.319 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.319 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.319 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.319 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.319 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.319 00:10:19.319 21:01:33 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:19.319 21:01:33 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:10:19.577 ===================================================== 00:10:19.577 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:19.577 ===================================================== 00:10:19.577 Controller Capabilities/Features 00:10:19.577 ================================ 00:10:19.577 Vendor ID: 1b36 00:10:19.577 Subsystem Vendor ID: 1af4 00:10:19.577 Serial Number: 12341 00:10:19.577 Model Number: QEMU NVMe Ctrl 00:10:19.577 Firmware Version: 8.0.0 00:10:19.577 Recommended Arb Burst: 6 00:10:19.577 IEEE OUI Identifier: 00 54 52 00:10:19.577 Multi-path I/O 00:10:19.577 May have multiple subsystem ports: No 00:10:19.577 May have multiple controllers: No 00:10:19.577 Associated with SR-IOV VF: No 00:10:19.577 Max Data Transfer Size: 524288 00:10:19.577 Max Number of Namespaces: 256 00:10:19.577 Max Number of I/O Queues: 64 00:10:19.577 NVMe Specification Version (VS): 1.4 00:10:19.577 NVMe Specification Version (Identify): 1.4 00:10:19.577 Maximum Queue Entries: 2048 00:10:19.577 Contiguous Queues Required: Yes 00:10:19.577 Arbitration Mechanisms Supported 00:10:19.577 Weighted Round Robin: Not Supported 00:10:19.577 Vendor Specific: Not Supported 00:10:19.577 Reset Timeout: 7500 ms 00:10:19.577 Doorbell Stride: 4 bytes 00:10:19.577 NVM Subsystem Reset: Not Supported 00:10:19.577 Command Sets Supported 00:10:19.577 NVM Command Set: Supported 00:10:19.577 Boot Partition: Not Supported 00:10:19.577 Memory Page Size Minimum: 4096 bytes 00:10:19.577 Memory Page Size Maximum: 65536 bytes 00:10:19.577 Persistent Memory Region: Not Supported 00:10:19.577 Optional Asynchronous Events Supported 00:10:19.577 Namespace Attribute Notices: Supported 00:10:19.577 Firmware Activation Notices: Not Supported 00:10:19.577 ANA Change Notices: Not Supported 00:10:19.577 PLE Aggregate Log Change Notices: Not Supported 00:10:19.577 LBA Status Info Alert Notices: Not Supported 00:10:19.577 EGE Aggregate Log Change Notices: Not Supported 00:10:19.577 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.577 Zone Descriptor Change Notices: Not Supported 00:10:19.577 Discovery Log Change Notices: Not Supported 00:10:19.577 Controller Attributes 00:10:19.577 128-bit Host Identifier: Not Supported 00:10:19.577 Non-Operational Permissive Mode: Not Supported 00:10:19.577 NVM Sets: Not Supported 00:10:19.577 Read Recovery Levels: Not Supported 00:10:19.577 Endurance Groups: Not Supported 00:10:19.577 Predictable Latency Mode: Not Supported 00:10:19.577 Traffic Based Keep Alive: Not Supported 00:10:19.577 Namespace Granularity: Not Supported 00:10:19.577 SQ Associations: Not Supported 00:10:19.577 UUID List: Not Supported 00:10:19.577 Multi-Domain Subsystem: Not Supported 00:10:19.577 Fixed Capacity Management: Not Supported 00:10:19.577 Variable Capacity Management: Not Supported 00:10:19.577 Delete Endurance Group: Not Supported
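Worth noting on the 12340 namespace above: it is the only one transferring metadata as a separate buffer, and it reports 1548666 LBAs under current LBA format #07 (4096-byte data plus 64-byte metadata). The parenthesised (5GiB) appears to be the data capacity truncated to whole GiB; the arithmetic, as a shell sketch:

$ echo $(( 1548666 * 4096 ))                        # 6343335936 data bytes
$ echo $(( 1548666 * 4096 / 1024 / 1024 / 1024 ))   # 5, i.e. printed as (5GiB)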
00:10:19.577 Delete NVM Set: Not Supported 00:10:19.577 Extended LBA Formats Supported: Supported 00:10:19.577 Flexible Data Placement Supported: Not Supported 00:10:19.577 00:10:19.577 Controller Memory Buffer Support 00:10:19.577 ================================ 00:10:19.577 Supported: No 00:10:19.577 00:10:19.577 Persistent Memory Region Support 00:10:19.577 ================================ 00:10:19.577 Supported: No 00:10:19.577 00:10:19.577 Admin Command Set Attributes 00:10:19.577 ============================ 00:10:19.577 Security Send/Receive: Not Supported 00:10:19.577 Format NVM: Supported 00:10:19.577 Firmware Activate/Download: Not Supported 00:10:19.577 Namespace Management: Supported 00:10:19.577 Device Self-Test: Not Supported 00:10:19.577 Directives: Supported 00:10:19.577 NVMe-MI: Not Supported 00:10:19.577 Virtualization Management: Not Supported 00:10:19.577 Doorbell Buffer Config: Supported 00:10:19.577 Get LBA Status Capability: Not Supported 00:10:19.577 Command & Feature Lockdown Capability: Not Supported 00:10:19.577 Abort Command Limit: 4 00:10:19.577 Async Event Request Limit: 4 00:10:19.577 Number of Firmware Slots: N/A 00:10:19.577 Firmware Slot 1 Read-Only: N/A 00:10:19.577 Firmware Activation Without Reset: N/A 00:10:19.577 Multiple Update Detection Support: N/A 00:10:19.577 Firmware Update Granularity: No Information Provided 00:10:19.577 Per-Namespace SMART Log: Yes 00:10:19.577 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.577 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:19.577 Command Effects Log Page: Supported 00:10:19.577 Get Log Page Extended Data: Supported 00:10:19.577 Telemetry Log Pages: Not Supported 00:10:19.577 Persistent Event Log Pages: Not Supported 00:10:19.577 Supported Log Pages Log Page: May Support 00:10:19.577 Commands Supported & Effects Log Page: Not Supported 00:10:19.577 Feature Identifiers & Effects Log Page: May Support 00:10:19.577 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.577 Data Area 4 for Telemetry Log: Not Supported 00:10:19.577 Error Log Page Entries Supported: 1 00:10:19.577 Keep Alive: Not Supported 00:10:19.577 00:10:19.577 NVM Command Set Attributes 00:10:19.577 ========================== 00:10:19.577 Submission Queue Entry Size 00:10:19.577 Max: 64 00:10:19.577 Min: 64 00:10:19.577 Completion Queue Entry Size 00:10:19.577 Max: 16 00:10:19.577 Min: 16 00:10:19.577 Number of Namespaces: 256 00:10:19.577 Compare Command: Supported 00:10:19.577 Write Uncorrectable Command: Not Supported 00:10:19.577 Dataset Management Command: Supported 00:10:19.577 Write Zeroes Command: Supported 00:10:19.577 Set Features Save Field: Supported 00:10:19.577 Reservations: Not Supported 00:10:19.577 Timestamp: Supported 00:10:19.577 Copy: Supported 00:10:19.577 Volatile Write Cache: Present 00:10:19.577 Atomic Write Unit (Normal): 1 00:10:19.577 Atomic Write Unit (PFail): 1 00:10:19.577 Atomic Compare & Write Unit: 1 00:10:19.577 Fused Compare & Write: Not Supported 00:10:19.577 Scatter-Gather List 00:10:19.577 SGL Command Set: Supported 00:10:19.577 SGL Keyed: Not Supported 00:10:19.577 SGL Bit Bucket Descriptor: Not Supported 00:10:19.577 SGL Metadata Pointer: Not Supported 00:10:19.577 Oversized SGL: Not Supported 00:10:19.577 SGL Metadata Address: Not Supported 00:10:19.577 SGL Offset: Not Supported 00:10:19.577 Transport SGL Data Block: Not Supported 00:10:19.577 Replay Protected Memory Block: Not Supported 00:10:19.577 00:10:19.577 Firmware Slot Information 00:10:19.577 =========================
00:10:19.577 Active slot: 1 00:10:19.577 Slot 1 Firmware Revision: 1.0 00:10:19.577 00:10:19.577 00:10:19.577 Commands Supported and Effects 00:10:19.577 ============================== 00:10:19.577 Admin Commands 00:10:19.577 -------------- 00:10:19.577 Delete I/O Submission Queue (00h): Supported 00:10:19.577 Create I/O Submission Queue (01h): Supported 00:10:19.577 Get Log Page (02h): Supported 00:10:19.577 Delete I/O Completion Queue (04h): Supported 00:10:19.577 Create I/O Completion Queue (05h): Supported 00:10:19.577 Identify (06h): Supported 00:10:19.577 Abort (08h): Supported 00:10:19.577 Set Features (09h): Supported 00:10:19.577 Get Features (0Ah): Supported 00:10:19.577 Asynchronous Event Request (0Ch): Supported 00:10:19.577 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.577 Directive Send (19h): Supported 00:10:19.577 Directive Receive (1Ah): Supported 00:10:19.577 Virtualization Management (1Ch): Supported 00:10:19.577 Doorbell Buffer Config (7Ch): Supported 00:10:19.577 Format NVM (80h): Supported LBA-Change 00:10:19.577 I/O Commands 00:10:19.577 ------------ 00:10:19.577 Flush (00h): Supported LBA-Change 00:10:19.577 Write (01h): Supported LBA-Change 00:10:19.577 Read (02h): Supported 00:10:19.577 Compare (05h): Supported 00:10:19.577 Write Zeroes (08h): Supported LBA-Change 00:10:19.577 Dataset Management (09h): Supported LBA-Change 00:10:19.577 Unknown (0Ch): Supported 00:10:19.577 Unknown (12h): Supported 00:10:19.577 Copy (19h): Supported LBA-Change 00:10:19.577 Unknown (1Dh): Supported LBA-Change 00:10:19.577 00:10:19.577 Error Log 00:10:19.577 ========= 00:10:19.577 00:10:19.577 Arbitration 00:10:19.577 =========== 00:10:19.577 Arbitration Burst: no limit 00:10:19.577 00:10:19.577 Power Management 00:10:19.577 ================ 00:10:19.577 Number of Power States: 1 00:10:19.577 Current Power State: Power State #0 00:10:19.577 Power State #0: 00:10:19.577 Max Power: 25.00 W 00:10:19.577 Non-Operational State: Operational 00:10:19.577 Entry Latency: 16 microseconds 00:10:19.577 Exit Latency: 4 microseconds 00:10:19.577 Relative Read Throughput: 0 00:10:19.577 Relative Read Latency: 0 00:10:19.577 Relative Write Throughput: 0 00:10:19.577 Relative Write Latency: 0 00:10:19.577 Idle Power: Not Reported 00:10:19.577 Active Power: Not Reported 00:10:19.577 Non-Operational Permissive Mode: Not Supported 00:10:19.577 00:10:19.577 Health Information 00:10:19.577 ================== 00:10:19.577 Critical Warnings: 00:10:19.577 Available Spare Space: OK 00:10:19.577 Temperature: OK 00:10:19.577 Device Reliability: OK 00:10:19.577 Read Only: No 00:10:19.577 Volatile Memory Backup: OK 00:10:19.577 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.577 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.577 Available Spare: 0% 00:10:19.577 Available Spare Threshold: 0% 00:10:19.577 Life Percentage Used: 0% 00:10:19.577 Data Units Read: 1167 00:10:19.577 Data Units Written: 543 00:10:19.577 Host Read Commands: 60027 00:10:19.577 Host Write Commands: 29558 00:10:19.577 Controller Busy Time: 0 minutes 00:10:19.577 Power Cycles: 0 00:10:19.577 Power On Hours: 0 hours 00:10:19.577 Unsafe Shutdowns: 0 00:10:19.577 Unrecoverable Media Errors: 0 00:10:19.577 Lifetime Error Log Entries: 0 00:10:19.577 Warning Temperature Time: 0 minutes 00:10:19.577 Critical Temperature Time: 0 minutes 00:10:19.577 00:10:19.577 Number of Queues 00:10:19.577 ================ 00:10:19.577 Number of I/O Submission Queues: 64 00:10:19.577 Number of I/O Completion Queues: 64 
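The health blocks report temperature in Kelvin with the Celsius value in parentheses; the tool evidently applies the integer offset K - 273, so the 323 Kelvin reading maps to 50 Celsius and the 343 Kelvin threshold to 70 Celsius. A one-line shell check:

$ echo $(( 323 - 273 )) $(( 343 - 273 ))   # 50 70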
00:10:19.577 00:10:19.577 ZNS Specific Controller Data 00:10:19.577 ============================ 00:10:19.577 Zone Append Size Limit: 0 00:10:19.577 00:10:19.577 00:10:19.577 Active Namespaces 00:10:19.577 ================= 00:10:19.577 Namespace ID:1 00:10:19.577 Error Recovery Timeout: Unlimited 00:10:19.577 Command Set Identifier: NVM (00h) 00:10:19.577 Deallocate: Supported 00:10:19.577 Deallocated/Unwritten Error: Supported 00:10:19.577 Deallocated Read Value: All 0x00 00:10:19.577 Deallocate in Write Zeroes: Not Supported 00:10:19.577 Deallocated Guard Field: 0xFFFF 00:10:19.577 Flush: Supported 00:10:19.577 Reservation: Not Supported 00:10:19.577 Namespace Sharing Capabilities: Private 00:10:19.577 Size (in LBAs): 1310720 (5GiB) 00:10:19.577 Capacity (in LBAs): 1310720 (5GiB) 00:10:19.577 Utilization (in LBAs): 1310720 (5GiB) 00:10:19.577 Thin Provisioning: Not Supported 00:10:19.577 Per-NS Atomic Units: No 00:10:19.577 Maximum Single Source Range Length: 128 00:10:19.577 Maximum Copy Length: 128 00:10:19.577 Maximum Source Range Count: 128 00:10:19.577 NGUID/EUI64 Never Reused: No 00:10:19.577 Namespace Write Protected: No 00:10:19.577 Number of LBA Formats: 8 00:10:19.577 Current LBA Format: LBA Format #04 00:10:19.577 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.577 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.577 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.577 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.577 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.577 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.577 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.577 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.577 00:10:19.577 21:01:33 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:19.577 21:01:33 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:10:19.836 ===================================================== 00:10:19.836 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:19.836 ===================================================== 00:10:19.836 Controller Capabilities/Features 00:10:19.836 ================================ 00:10:19.836 Vendor ID: 1b36 00:10:19.836 Subsystem Vendor ID: 1af4 00:10:19.836 Serial Number: 12342 00:10:19.836 Model Number: QEMU NVMe Ctrl 00:10:19.836 Firmware Version: 8.0.0 00:10:19.836 Recommended Arb Burst: 6 00:10:19.836 IEEE OUI Identifier: 00 54 52 00:10:19.836 Multi-path I/O 00:10:19.836 May have multiple subsystem ports: No 00:10:19.836 May have multiple controllers: No 00:10:19.836 Associated with SR-IOV VF: No 00:10:19.836 Max Data Transfer Size: 524288 00:10:19.836 Max Number of Namespaces: 256 00:10:19.836 Max Number of I/O Queues: 64 00:10:19.836 NVMe Specification Version (VS): 1.4 00:10:19.836 NVMe Specification Version (Identify): 1.4 00:10:19.836 Maximum Queue Entries: 2048 00:10:19.836 Contiguous Queues Required: Yes 00:10:19.836 Arbitration Mechanisms Supported 00:10:19.836 Weighted Round Robin: Not Supported 00:10:19.836 Vendor Specific: Not Supported 00:10:19.836 Reset Timeout: 7500 ms 00:10:19.836 Doorbell Stride: 4 bytes 00:10:19.836 NVM Subsystem Reset: Not Supported 00:10:19.836 Command Sets Supported 00:10:19.836 NVM Command Set: Supported 00:10:19.836 Boot Partition: Not Supported 00:10:19.836 Memory Page Size Minimum: 4096 bytes 00:10:19.836 Memory Page Size Maximum: 65536 bytes 00:10:19.836 Persistent Memory Region: Not Supported 00:10:19.836 Optional 
Asynchronous Events Supported 00:10:19.836 Namespace Attribute Notices: Supported 00:10:19.836 Firmware Activation Notices: Not Supported 00:10:19.836 ANA Change Notices: Not Supported 00:10:19.836 PLE Aggregate Log Change Notices: Not Supported 00:10:19.836 LBA Status Info Alert Notices: Not Supported 00:10:19.836 EGE Aggregate Log Change Notices: Not Supported 00:10:19.836 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.836 Zone Descriptor Change Notices: Not Supported 00:10:19.836 Discovery Log Change Notices: Not Supported 00:10:19.836 Controller Attributes 00:10:19.836 128-bit Host Identifier: Not Supported 00:10:19.836 Non-Operational Permissive Mode: Not Supported 00:10:19.836 NVM Sets: Not Supported 00:10:19.836 Read Recovery Levels: Not Supported 00:10:19.836 Endurance Groups: Not Supported 00:10:19.836 Predictable Latency Mode: Not Supported 00:10:19.836 Traffic Based Keep Alive: Not Supported 00:10:19.836 Namespace Granularity: Not Supported 00:10:19.836 SQ Associations: Not Supported 00:10:19.836 UUID List: Not Supported 00:10:19.836 Multi-Domain Subsystem: Not Supported 00:10:19.836 Fixed Capacity Management: Not Supported 00:10:19.836 Variable Capacity Management: Not Supported 00:10:19.836 Delete Endurance Group: Not Supported 00:10:19.836 Delete NVM Set: Not Supported 00:10:19.836 Extended LBA Formats Supported: Supported 00:10:19.836 Flexible Data Placement Supported: Not Supported 00:10:19.836 00:10:19.836 Controller Memory Buffer Support 00:10:19.836 ================================ 00:10:19.836 Supported: No 00:10:19.836 00:10:19.836 Persistent Memory Region Support 00:10:19.836 ================================ 00:10:19.836 Supported: No 00:10:19.836 00:10:19.836 Admin Command Set Attributes 00:10:19.836 ============================ 00:10:19.836 Security Send/Receive: Not Supported 00:10:19.836 Format NVM: Supported 00:10:19.836 Firmware Activate/Download: Not Supported 00:10:19.836 Namespace Management: Supported 00:10:19.836 Device Self-Test: Not Supported 00:10:19.836 Directives: Supported 00:10:19.836 NVMe-MI: Not Supported 00:10:19.836 Virtualization Management: Not Supported 00:10:19.836 Doorbell Buffer Config: Supported 00:10:19.836 Get LBA Status Capability: Not Supported 00:10:19.836 Command & Feature Lockdown Capability: Not Supported 00:10:19.836 Abort Command Limit: 4 00:10:19.836 Async Event Request Limit: 4 00:10:19.836 Number of Firmware Slots: N/A 00:10:19.836 Firmware Slot 1 Read-Only: N/A 00:10:19.836 Firmware Activation Without Reset: N/A 00:10:19.836 Multiple Update Detection Support: N/A 00:10:19.836 Firmware Update Granularity: No Information Provided 00:10:19.836 Per-Namespace SMART Log: Yes 00:10:19.836 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.836 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:19.836 Command Effects Log Page: Supported 00:10:19.836 Get Log Page Extended Data: Supported 00:10:19.836 Telemetry Log Pages: Not Supported 00:10:19.836 Persistent Event Log Pages: Not Supported 00:10:19.836 Supported Log Pages Log Page: May Support 00:10:19.836 Commands Supported & Effects Log Page: Not Supported 00:10:19.836 Feature Identifiers & Effects Log Page: May Support 00:10:19.836 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.836 Data Area 4 for Telemetry Log: Not Supported 00:10:19.836 Error Log Page Entries Supported: 1 00:10:19.836 Keep Alive: Not Supported 00:10:19.836 00:10:19.836 NVM Command Set Attributes 00:10:19.836 ========================== 00:10:19.836 Submission Queue Entry Size
00:10:19.836 Max: 64 00:10:19.836 Min: 64 00:10:19.836 Completion Queue Entry Size 00:10:19.836 Max: 16 00:10:19.836 Min: 16 00:10:19.836 Number of Namespaces: 256 00:10:19.836 Compare Command: Supported 00:10:19.836 Write Uncorrectable Command: Not Supported 00:10:19.836 Dataset Management Command: Supported 00:10:19.836 Write Zeroes Command: Supported 00:10:19.836 Set Features Save Field: Supported 00:10:19.836 Reservations: Not Supported 00:10:19.836 Timestamp: Supported 00:10:19.836 Copy: Supported 00:10:19.836 Volatile Write Cache: Present 00:10:19.836 Atomic Write Unit (Normal): 1 00:10:19.836 Atomic Write Unit (PFail): 1 00:10:19.836 Atomic Compare & Write Unit: 1 00:10:19.836 Fused Compare & Write: Not Supported 00:10:19.836 Scatter-Gather List 00:10:19.836 SGL Command Set: Supported 00:10:19.836 SGL Keyed: Not Supported 00:10:19.836 SGL Bit Bucket Descriptor: Not Supported 00:10:19.836 SGL Metadata Pointer: Not Supported 00:10:19.836 Oversized SGL: Not Supported 00:10:19.836 SGL Metadata Address: Not Supported 00:10:19.836 SGL Offset: Not Supported 00:10:19.836 Transport SGL Data Block: Not Supported 00:10:19.836 Replay Protected Memory Block: Not Supported 00:10:19.836 00:10:19.836 Firmware Slot Information 00:10:19.836 ========================= 00:10:19.836 Active slot: 1 00:10:19.836 Slot 1 Firmware Revision: 1.0 00:10:19.836 00:10:19.836 00:10:19.836 Commands Supported and Effects 00:10:19.836 ============================== 00:10:19.836 Admin Commands 00:10:19.836 -------------- 00:10:19.836 Delete I/O Submission Queue (00h): Supported 00:10:19.836 Create I/O Submission Queue (01h): Supported 00:10:19.836 Get Log Page (02h): Supported 00:10:19.836 Delete I/O Completion Queue (04h): Supported 00:10:19.836 Create I/O Completion Queue (05h): Supported 00:10:19.836 Identify (06h): Supported 00:10:19.836 Abort (08h): Supported 00:10:19.836 Set Features (09h): Supported 00:10:19.836 Get Features (0Ah): Supported 00:10:19.836 Asynchronous Event Request (0Ch): Supported 00:10:19.836 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.836 Directive Send (19h): Supported 00:10:19.836 Directive Receive (1Ah): Supported 00:10:19.836 Virtualization Management (1Ch): Supported 00:10:19.836 Doorbell Buffer Config (7Ch): Supported 00:10:19.836 Format NVM (80h): Supported LBA-Change 00:10:19.836 I/O Commands 00:10:19.836 ------------ 00:10:19.836 Flush (00h): Supported LBA-Change 00:10:19.836 Write (01h): Supported LBA-Change 00:10:19.836 Read (02h): Supported 00:10:19.836 Compare (05h): Supported 00:10:19.836 Write Zeroes (08h): Supported LBA-Change 00:10:19.836 Dataset Management (09h): Supported LBA-Change 00:10:19.836 Unknown (0Ch): Supported 00:10:19.836 Unknown (12h): Supported 00:10:19.836 Copy (19h): Supported LBA-Change 00:10:19.836 Unknown (1Dh): Supported LBA-Change 00:10:19.836 00:10:19.836 Error Log 00:10:19.836 ========= 00:10:19.836 00:10:19.836 Arbitration 00:10:19.836 =========== 00:10:19.836 Arbitration Burst: no limit 00:10:19.836 00:10:19.836 Power Management 00:10:19.836 ================ 00:10:19.836 Number of Power States: 1 00:10:19.836 Current Power State: Power State #0 00:10:19.836 Power State #0: 00:10:19.836 Max Power: 25.00 W 00:10:19.836 Non-Operational State: Operational 00:10:19.836 Entry Latency: 16 microseconds 00:10:19.836 Exit Latency: 4 microseconds 00:10:19.836 Relative Read Throughput: 0 00:10:19.836 Relative Read Latency: 0 00:10:19.836 Relative Write Throughput: 0 00:10:19.836 Relative Write Latency: 0 00:10:19.836 Idle Power: Not 
Reported 00:10:19.836 Active Power: Not Reported 00:10:19.836 Non-Operational Permissive Mode: Not Supported 00:10:19.836 00:10:19.836 Health Information 00:10:19.836 ================== 00:10:19.836 Critical Warnings: 00:10:19.836 Available Spare Space: OK 00:10:19.836 Temperature: OK 00:10:19.836 Device Reliability: OK 00:10:19.837 Read Only: No 00:10:19.837 Volatile Memory Backup: OK 00:10:19.837 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.837 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.837 Available Spare: 0% 00:10:19.837 Available Spare Threshold: 0% 00:10:19.837 Life Percentage Used: 0% 00:10:19.837 Data Units Read: 3612 00:10:19.837 Data Units Written: 1671 00:10:19.837 Host Read Commands: 181624 00:10:19.837 Host Write Commands: 89309 00:10:19.837 Controller Busy Time: 0 minutes 00:10:19.837 Power Cycles: 0 00:10:19.837 Power On Hours: 0 hours 00:10:19.837 Unsafe Shutdowns: 0 00:10:19.837 Unrecoverable Media Errors: 0 00:10:19.837 Lifetime Error Log Entries: 0 00:10:19.837 Warning Temperature Time: 0 minutes 00:10:19.837 Critical Temperature Time: 0 minutes 00:10:19.837 00:10:19.837 Number of Queues 00:10:19.837 ================ 00:10:19.837 Number of I/O Submission Queues: 64 00:10:19.837 Number of I/O Completion Queues: 64 00:10:19.837 00:10:19.837 ZNS Specific Controller Data 00:10:19.837 ============================ 00:10:19.837 Zone Append Size Limit: 0 00:10:19.837 00:10:19.837 00:10:19.837 Active Namespaces 00:10:19.837 ================= 00:10:19.837 Namespace ID:1 00:10:19.837 Error Recovery Timeout: Unlimited 00:10:19.837 Command Set Identifier: NVM (00h) 00:10:19.837 Deallocate: Supported 00:10:19.837 Deallocated/Unwritten Error: Supported 00:10:19.837 Deallocated Read Value: All 0x00 00:10:19.837 Deallocate in Write Zeroes: Not Supported 00:10:19.837 Deallocated Guard Field: 0xFFFF 00:10:19.837 Flush: Supported 00:10:19.837 Reservation: Not Supported 00:10:19.837 Namespace Sharing Capabilities: Private 00:10:19.837 Size (in LBAs): 1048576 (4GiB) 00:10:19.837 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.837 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.837 Thin Provisioning: Not Supported 00:10:19.837 Per-NS Atomic Units: No 00:10:19.837 Maximum Single Source Range Length: 128 00:10:19.837 Maximum Copy Length: 128 00:10:19.837 Maximum Source Range Count: 128 00:10:19.837 NGUID/EUI64 Never Reused: No 00:10:19.837 Namespace Write Protected: No 00:10:19.837 Number of LBA Formats: 8 00:10:19.837 Current LBA Format: LBA Format #04 00:10:19.837 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.837 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.837 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.837 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.837 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.837 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.837 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.837 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.837 00:10:19.837 Namespace ID:2 00:10:19.837 Error Recovery Timeout: Unlimited 00:10:19.837 Command Set Identifier: NVM (00h) 00:10:19.837 Deallocate: Supported 00:10:19.837 Deallocated/Unwritten Error: Supported 00:10:19.837 Deallocated Read Value: All 0x00 00:10:19.837 Deallocate in Write Zeroes: Not Supported 00:10:19.837 Deallocated Guard Field: 0xFFFF 00:10:19.837 Flush: Supported 00:10:19.837 Reservation: Not Supported 00:10:19.837 Namespace Sharing Capabilities: Private 00:10:19.837 Size (in LBAs): 1048576 (4GiB) 00:10:19.837 
Capacity (in LBAs): 1048576 (4GiB) 00:10:19.837 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.837 Thin Provisioning: Not Supported 00:10:19.837 Per-NS Atomic Units: No 00:10:19.837 Maximum Single Source Range Length: 128 00:10:19.837 Maximum Copy Length: 128 00:10:19.837 Maximum Source Range Count: 128 00:10:19.837 NGUID/EUI64 Never Reused: No 00:10:19.837 Namespace Write Protected: No 00:10:19.837 Number of LBA Formats: 8 00:10:19.837 Current LBA Format: LBA Format #04 00:10:19.837 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.837 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.837 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.837 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.837 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.837 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.837 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.837 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.837 00:10:19.837 Namespace ID:3 00:10:19.837 Error Recovery Timeout: Unlimited 00:10:19.837 Command Set Identifier: NVM (00h) 00:10:19.837 Deallocate: Supported 00:10:19.837 Deallocated/Unwritten Error: Supported 00:10:19.837 Deallocated Read Value: All 0x00 00:10:19.837 Deallocate in Write Zeroes: Not Supported 00:10:19.837 Deallocated Guard Field: 0xFFFF 00:10:19.837 Flush: Supported 00:10:19.837 Reservation: Not Supported 00:10:19.837 Namespace Sharing Capabilities: Private 00:10:19.837 Size (in LBAs): 1048576 (4GiB) 00:10:19.837 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.837 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.837 Thin Provisioning: Not Supported 00:10:19.837 Per-NS Atomic Units: No 00:10:19.837 Maximum Single Source Range Length: 128 00:10:19.837 Maximum Copy Length: 128 00:10:19.837 Maximum Source Range Count: 128 00:10:19.837 NGUID/EUI64 Never Reused: No 00:10:19.837 Namespace Write Protected: No 00:10:19.837 Number of LBA Formats: 8 00:10:19.837 Current LBA Format: LBA Format #04 00:10:19.837 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.837 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.837 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.837 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.837 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.837 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.837 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.837 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.837 00:10:19.837 21:01:33 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:19.837 21:01:33 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:10:20.095 ===================================================== 00:10:20.095 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:20.095 ===================================================== 00:10:20.095 Controller Capabilities/Features 00:10:20.095 ================================ 00:10:20.095 Vendor ID: 1b36 00:10:20.095 Subsystem Vendor ID: 1af4 00:10:20.095 Serial Number: 12343 00:10:20.095 Model Number: QEMU NVMe Ctrl 00:10:20.095 Firmware Version: 8.0.0 00:10:20.095 Recommended Arb Burst: 6 00:10:20.095 IEEE OUI Identifier: 00 54 52 00:10:20.095 Multi-path I/O 00:10:20.095 May have multiple subsystem ports: No 00:10:20.095 May have multiple controllers: Yes 00:10:20.095 Associated with SR-IOV VF: No 00:10:20.095 Max Data Transfer Size: 524288 00:10:20.095 Max Number of Namespaces: 256 00:10:20.095 Max Number of I/O 
Queues: 64 00:10:20.095 NVMe Specification Version (VS): 1.4 00:10:20.095 NVMe Specification Version (Identify): 1.4 00:10:20.095 Maximum Queue Entries: 2048 00:10:20.095 Contiguous Queues Required: Yes 00:10:20.095 Arbitration Mechanisms Supported 00:10:20.095 Weighted Round Robin: Not Supported 00:10:20.095 Vendor Specific: Not Supported 00:10:20.095 Reset Timeout: 7500 ms 00:10:20.095 Doorbell Stride: 4 bytes 00:10:20.095 NVM Subsystem Reset: Not Supported 00:10:20.095 Command Sets Supported 00:10:20.095 NVM Command Set: Supported 00:10:20.095 Boot Partition: Not Supported 00:10:20.095 Memory Page Size Minimum: 4096 bytes 00:10:20.095 Memory Page Size Maximum: 65536 bytes 00:10:20.095 Persistent Memory Region: Not Supported 00:10:20.095 Optional Asynchronous Events Supported 00:10:20.095 Namespace Attribute Notices: Supported 00:10:20.095 Firmware Activation Notices: Not Supported 00:10:20.095 ANA Change Notices: Not Supported 00:10:20.095 PLE Aggregate Log Change Notices: Not Supported 00:10:20.095 LBA Status Info Alert Notices: Not Supported 00:10:20.095 EGE Aggregate Log Change Notices: Not Supported 00:10:20.095 Normal NVM Subsystem Shutdown event: Not Supported 00:10:20.095 Zone Descriptor Change Notices: Not Supported 00:10:20.095 Discovery Log Change Notices: Not Supported 00:10:20.095 Controller Attributes 00:10:20.095 128-bit Host Identifier: Not Supported 00:10:20.095 Non-Operational Permissive Mode: Not Supported 00:10:20.095 NVM Sets: Not Supported 00:10:20.095 Read Recovery Levels: Not Supported 00:10:20.095 Endurance Groups: Supported 00:10:20.095 Predictable Latency Mode: Not Supported 00:10:20.095 Traffic Based Keep Alive: Not Supported 00:10:20.095 Namespace Granularity: Not Supported 00:10:20.095 SQ Associations: Not Supported 00:10:20.095 UUID List: Not Supported 00:10:20.095 Multi-Domain Subsystem: Not Supported 00:10:20.095 Fixed Capacity Management: Not Supported 00:10:20.095 Variable Capacity Management: Not Supported 00:10:20.095 Delete Endurance Group: Not Supported 00:10:20.095 Delete NVM Set: Not Supported 00:10:20.095 Extended LBA Formats Supported: Supported 00:10:20.095 Flexible Data Placement Supported: Supported 00:10:20.095 00:10:20.095 Controller Memory Buffer Support 00:10:20.095 ================================ 00:10:20.095 Supported: No 00:10:20.095 00:10:20.095 Persistent Memory Region Support 00:10:20.095 ================================ 00:10:20.095 Supported: No 00:10:20.095 00:10:20.095 Admin Command Set Attributes 00:10:20.095 ============================ 00:10:20.095 Security Send/Receive: Not Supported 00:10:20.095 Format NVM: Supported 00:10:20.095 Firmware Activate/Download: Not Supported 00:10:20.095 Namespace Management: Supported 00:10:20.095 Device Self-Test: Not Supported 00:10:20.095 Directives: Supported 00:10:20.095 NVMe-MI: Not Supported 00:10:20.095 Virtualization Management: Not Supported 00:10:20.095 Doorbell Buffer Config: Supported 00:10:20.095 Get LBA Status Capability: Not Supported 00:10:20.095 Command & Feature Lockdown Capability: Not Supported 00:10:20.095 Abort Command Limit: 4 00:10:20.095 Async Event Request Limit: 4 00:10:20.095 Number of Firmware Slots: N/A 00:10:20.095 Firmware Slot 1 Read-Only: N/A 00:10:20.095 Firmware Activation Without Reset: N/A 00:10:20.095 Multiple Update Detection Support: N/A 00:10:20.095 Firmware Update Granularity: No Information Provided 00:10:20.095 Per-Namespace SMART Log: Yes 00:10:20.095 Asymmetric Namespace Access Log Page: Not Supported 00:10:20.095 Subsystem NQN:
nqn.2019-08.org.qemu:fdp-subsys3 00:10:20.095 Command Effects Log Page: Supported 00:10:20.095 Get Log Page Extended Data: Supported 00:10:20.096 Telemetry Log Pages: Not Supported 00:10:20.096 Persistent Event Log Pages: Not Supported 00:10:20.096 Supported Log Pages Log Page: May Support 00:10:20.096 Commands Supported & Effects Log Page: Not Supported 00:10:20.096 Feature Identifiers & Effects Log Page: May Support 00:10:20.096 NVMe-MI Commands & Effects Log Page: May Support 00:10:20.096 Data Area 4 for Telemetry Log: Not Supported 00:10:20.096 Error Log Page Entries Supported: 1 00:10:20.096 Keep Alive: Not Supported 00:10:20.096 00:10:20.096 NVM Command Set Attributes 00:10:20.096 ========================== 00:10:20.096 Submission Queue Entry Size 00:10:20.096 Max: 64 00:10:20.096 Min: 64 00:10:20.096 Completion Queue Entry Size 00:10:20.096 Max: 16 00:10:20.096 Min: 16 00:10:20.096 Number of Namespaces: 256 00:10:20.096 Compare Command: Supported 00:10:20.096 Write Uncorrectable Command: Not Supported 00:10:20.096 Dataset Management Command: Supported 00:10:20.096 Write Zeroes Command: Supported 00:10:20.096 Set Features Save Field: Supported 00:10:20.096 Reservations: Not Supported 00:10:20.096 Timestamp: Supported 00:10:20.096 Copy: Supported 00:10:20.096 Volatile Write Cache: Present 00:10:20.096 Atomic Write Unit (Normal): 1 00:10:20.096 Atomic Write Unit (PFail): 1 00:10:20.096 Atomic Compare & Write Unit: 1 00:10:20.096 Fused Compare & Write: Not Supported 00:10:20.096 Scatter-Gather List 00:10:20.096 SGL Command Set: Supported 00:10:20.096 SGL Keyed: Not Supported 00:10:20.096 SGL Bit Bucket Descriptor: Not Supported 00:10:20.096 SGL Metadata Pointer: Not Supported 00:10:20.096 Oversized SGL: Not Supported 00:10:20.096 SGL Metadata Address: Not Supported 00:10:20.096 SGL Offset: Not Supported 00:10:20.096 Transport SGL Data Block: Not Supported 00:10:20.096 Replay Protected Memory Block: Not Supported 00:10:20.096 00:10:20.096 Firmware Slot Information 00:10:20.096 ========================= 00:10:20.096 Active slot: 1 00:10:20.096 Slot 1 Firmware Revision: 1.0 00:10:20.096 00:10:20.096 00:10:20.096 Commands Supported and Effects 00:10:20.096 ============================== 00:10:20.096 Admin Commands 00:10:20.096 -------------- 00:10:20.096 Delete I/O Submission Queue (00h): Supported 00:10:20.096 Create I/O Submission Queue (01h): Supported 00:10:20.096 Get Log Page (02h): Supported 00:10:20.096 Delete I/O Completion Queue (04h): Supported 00:10:20.096 Create I/O Completion Queue (05h): Supported 00:10:20.096 Identify (06h): Supported 00:10:20.096 Abort (08h): Supported 00:10:20.096 Set Features (09h): Supported 00:10:20.096 Get Features (0Ah): Supported 00:10:20.096 Asynchronous Event Request (0Ch): Supported 00:10:20.096 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:20.096 Directive Send (19h): Supported 00:10:20.096 Directive Receive (1Ah): Supported 00:10:20.096 Virtualization Management (1Ch): Supported 00:10:20.096 Doorbell Buffer Config (7Ch): Supported 00:10:20.096 Format NVM (80h): Supported LBA-Change 00:10:20.096 I/O Commands 00:10:20.096 ------------ 00:10:20.096 Flush (00h): Supported LBA-Change 00:10:20.096 Write (01h): Supported LBA-Change 00:10:20.096 Read (02h): Supported 00:10:20.096 Compare (05h): Supported 00:10:20.096 Write Zeroes (08h): Supported LBA-Change 00:10:20.096 Dataset Management (09h): Supported LBA-Change 00:10:20.096 Unknown (0Ch): Supported 00:10:20.096 Unknown (12h): Supported 00:10:20.096 Copy (19h): Supported LBA-Change
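Of the four controllers probed here, only 12343 (nqn.2019-08.org.qemu:fdp-subsys3, at 0000:00:09.0) reports Endurance Groups: Supported and Flexible Data Placement Supported: Supported, which is why the FDP feature and log pages that follow appear only in its dump. To confirm which controllers expose FDP without reading the full output, grepping an identify run is enough; a sketch, assuming the binary path used above:

$ /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 | grep -i 'flexible data placement'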
00:10:20.096 Unknown (1Dh): Supported LBA-Change 00:10:20.096 00:10:20.096 Error Log 00:10:20.096 ========= 00:10:20.096 00:10:20.096 Arbitration 00:10:20.096 =========== 00:10:20.096 Arbitration Burst: no limit 00:10:20.096 00:10:20.096 Power Management 00:10:20.096 ================ 00:10:20.096 Number of Power States: 1 00:10:20.096 Current Power State: Power State #0 00:10:20.096 Power State #0: 00:10:20.096 Max Power: 25.00 W 00:10:20.096 Non-Operational State: Operational 00:10:20.096 Entry Latency: 16 microseconds 00:10:20.096 Exit Latency: 4 microseconds 00:10:20.096 Relative Read Throughput: 0 00:10:20.096 Relative Read Latency: 0 00:10:20.096 Relative Write Throughput: 0 00:10:20.096 Relative Write Latency: 0 00:10:20.096 Idle Power: Not Reported 00:10:20.096 Active Power: Not Reported 00:10:20.096 Non-Operational Permissive Mode: Not Supported 00:10:20.096 00:10:20.096 Health Information 00:10:20.096 ================== 00:10:20.096 Critical Warnings: 00:10:20.096 Available Spare Space: OK 00:10:20.096 Temperature: OK 00:10:20.096 Device Reliability: OK 00:10:20.096 Read Only: No 00:10:20.096 Volatile Memory Backup: OK 00:10:20.096 Current Temperature: 323 Kelvin (50 Celsius) 00:10:20.096 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:20.096 Available Spare: 0% 00:10:20.096 Available Spare Threshold: 0% 00:10:20.096 Life Percentage Used: 0% 00:10:20.096 Data Units Read: 1239 00:10:20.096 Data Units Written: 575 00:10:20.096 Host Read Commands: 60785 00:10:20.096 Host Write Commands: 29920 00:10:20.096 Controller Busy Time: 0 minutes 00:10:20.096 Power Cycles: 0 00:10:20.096 Power On Hours: 0 hours 00:10:20.096 Unsafe Shutdowns: 0 00:10:20.096 Unrecoverable Media Errors: 0 00:10:20.096 Lifetime Error Log Entries: 0 00:10:20.096 Warning Temperature Time: 0 minutes 00:10:20.096 Critical Temperature Time: 0 minutes 00:10:20.096 00:10:20.096 Number of Queues 00:10:20.096 ================ 00:10:20.096 Number of I/O Submission Queues: 64 00:10:20.096 Number of I/O Completion Queues: 64 00:10:20.096 00:10:20.096 ZNS Specific Controller Data 00:10:20.096 ============================ 00:10:20.096 Zone Append Size Limit: 0 00:10:20.096 00:10:20.096 00:10:20.096 Active Namespaces 00:10:20.096 ================= 00:10:20.096 Namespace ID:1 00:10:20.096 Error Recovery Timeout: Unlimited 00:10:20.096 Command Set Identifier: NVM (00h) 00:10:20.096 Deallocate: Supported 00:10:20.096 Deallocated/Unwritten Error: Supported 00:10:20.096 Deallocated Read Value: All 0x00 00:10:20.096 Deallocate in Write Zeroes: Not Supported 00:10:20.096 Deallocated Guard Field: 0xFFFF 00:10:20.096 Flush: Supported 00:10:20.096 Reservation: Not Supported 00:10:20.096 Namespace Sharing Capabilities: Multiple Controllers 00:10:20.096 Size (in LBAs): 262144 (1GiB) 00:10:20.096 Capacity (in LBAs): 262144 (1GiB) 00:10:20.096 Utilization (in LBAs): 262144 (1GiB) 00:10:20.096 Thin Provisioning: Not Supported 00:10:20.096 Per-NS Atomic Units: No 00:10:20.096 Maximum Single Source Range Length: 128 00:10:20.096 Maximum Copy Length: 128 00:10:20.096 Maximum Source Range Count: 128 00:10:20.096 NGUID/EUI64 Never Reused: No 00:10:20.096 Namespace Write Protected: No 00:10:20.096 Endurance group ID: 1 00:10:20.096 Number of LBA Formats: 8 00:10:20.096 Current LBA Format: LBA Format #04 00:10:20.096 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.096 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:20.096 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:20.096 LBA Format #03: Data Size: 512 Metadata Size: 64 
00:10:20.096 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:20.096 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:20.096 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:20.096 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:20.096 00:10:20.096 Get Feature FDP: 00:10:20.096 ================ 00:10:20.096 Enabled: Yes 00:10:20.096 FDP configuration index: 0 00:10:20.096 00:10:20.096 FDP configurations log page 00:10:20.096 =========================== 00:10:20.096 Number of FDP configurations: 1 00:10:20.096 Version: 0 00:10:20.096 Size: 112 00:10:20.096 FDP Configuration Descriptor: 0 00:10:20.096 Descriptor Size: 96 00:10:20.096 Reclaim Group Identifier format: 2 00:10:20.096 FDP Volatile Write Cache: Not Present 00:10:20.096 FDP Configuration: Valid 00:10:20.096 Vendor Specific Size: 0 00:10:20.096 Number of Reclaim Groups: 2 00:10:20.096 Number of Reclaim Unit Handles: 8 00:10:20.096 Max Placement Identifiers: 128 00:10:20.096 Number of Namespaces Supported: 256 00:10:20.096 Reclaim Unit Nominal Size: 6000000 bytes 00:10:20.096 Estimated Reclaim Unit Time Limit: Not Reported 00:10:20.096 RUH Desc #000: RUH Type: Initially Isolated 00:10:20.096 RUH Desc #001: RUH Type: Initially Isolated 00:10:20.096 RUH Desc #002: RUH Type: Initially Isolated 00:10:20.096 RUH Desc #003: RUH Type: Initially Isolated 00:10:20.096 RUH Desc #004: RUH Type: Initially Isolated 00:10:20.096 RUH Desc #005: RUH Type: Initially Isolated 00:10:20.096 RUH Desc #006: RUH Type: Initially Isolated 00:10:20.096 RUH Desc #007: RUH Type: Initially Isolated 00:10:20.096 00:10:20.096 FDP reclaim unit handle usage log page 00:10:20.353 ====================================== 00:10:20.353 Number of Reclaim Unit Handles: 8 00:10:20.353 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:20.353 RUH Usage Desc #001: RUH Attributes: Unused 00:10:20.353 RUH Usage Desc #002: RUH Attributes: Unused 00:10:20.353 RUH Usage Desc #003: RUH Attributes: Unused 00:10:20.353 RUH Usage Desc #004: RUH Attributes: Unused 00:10:20.353 RUH Usage Desc #005: RUH Attributes: Unused 00:10:20.353 RUH Usage Desc #006: RUH Attributes: Unused 00:10:20.353 RUH Usage Desc #007: RUH Attributes: Unused 00:10:20.353 00:10:20.353 FDP statistics log page 00:10:20.353 ======================= 00:10:20.353 Host bytes with metadata written: 372441088 00:10:20.353 Media bytes with metadata written: 372572160 00:10:20.353 Media bytes erased: 0 00:10:20.353 00:10:20.353 FDP events log page 00:10:20.353 =================== 00:10:20.353 Number of FDP events: 0 00:10:20.353 00:10:20.353 ************************************ 00:10:20.353 END TEST nvme_identify 00:10:20.353 ************************************ 00:10:20.353 00:10:20.353 real 0m1.578s 00:10:20.353 user 0m0.613s 00:10:20.353 sys 0m0.755s 00:10:20.353 21:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.353 21:01:34 -- common/autotest_common.sh@10 -- # set +x 00:10:20.353 21:01:34 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:20.353 21:01:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:20.353 21:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:20.353 21:01:34 -- common/autotest_common.sh@10 -- # set +x 00:10:20.353 ************************************ 00:10:20.353 START TEST nvme_perf 00:10:20.353 ************************************ 00:10:20.353 21:01:34 -- common/autotest_common.sh@1104 -- # nvme_perf 00:10:20.353 21:01:34 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q
128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:21.729 Initializing NVMe Controllers 00:10:21.729 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:21.729 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:21.729 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:21.729 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:21.729 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:21.729 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:21.729 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:21.729 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:21.729 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:21.729 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:21.729 Initialization complete. Launching workers. 00:10:21.729 ======================================================== 00:10:21.729 Latency(us) 00:10:21.729 Device Information : IOPS MiB/s Average min max 00:10:21.729 PCIE (0000:00:06.0) NSID 1 from core 0: 14129.76 165.58 9052.34 6570.46 42148.26 00:10:21.729 PCIE (0000:00:07.0) NSID 1 from core 0: 14129.76 165.58 9041.40 6708.32 40312.57 00:10:21.729 PCIE (0000:00:09.0) NSID 1 from core 0: 14129.76 165.58 9028.89 6990.05 39232.47 00:10:21.729 PCIE (0000:00:08.0) NSID 1 from core 0: 14129.76 165.58 9016.22 6923.36 37301.11 00:10:21.729 PCIE (0000:00:08.0) NSID 2 from core 0: 14257.06 167.07 8923.24 6783.12 25136.33 00:10:21.729 PCIE (0000:00:08.0) NSID 3 from core 0: 14257.06 167.07 8911.14 6818.79 23113.89 00:10:21.729 ======================================================== 00:10:21.729 Total : 85033.17 996.48 8995.30 6570.46 42148.26 00:10:21.729 00:10:21.729 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:21.729 ================================================================================= 00:10:21.729 1.00000% : 7208.960us 00:10:21.729 10.00000% : 7685.585us 00:10:21.729 25.00000% : 8162.211us 00:10:21.729 50.00000% : 8757.993us 00:10:21.729 75.00000% : 9413.353us 00:10:21.729 90.00000% : 9949.556us 00:10:21.729 95.00000% : 10485.760us 00:10:21.729 98.00000% : 11260.276us 00:10:21.729 99.00000% : 14000.873us 00:10:21.729 99.50000% : 39559.913us 00:10:21.729 99.90000% : 41704.727us 00:10:21.729 99.99000% : 42181.353us 00:10:21.729 99.99900% : 42181.353us 00:10:21.729 99.99990% : 42181.353us 00:10:21.729 99.99999% : 42181.353us 00:10:21.729 00:10:21.729 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:21.729 ================================================================================= 00:10:21.729 1.00000% : 7357.905us 00:10:21.729 10.00000% : 7804.742us 00:10:21.729 25.00000% : 8221.789us 00:10:21.729 50.00000% : 8757.993us 00:10:21.729 75.00000% : 9294.196us 00:10:21.729 90.00000% : 9830.400us 00:10:21.729 95.00000% : 10247.447us 00:10:21.729 98.00000% : 11319.855us 00:10:21.729 99.00000% : 14954.124us 00:10:21.729 99.50000% : 37653.411us 00:10:21.729 99.90000% : 39798.225us 00:10:21.729 99.99000% : 40513.164us 00:10:21.729 99.99900% : 40513.164us 00:10:21.729 99.99990% : 40513.164us 00:10:21.729 99.99999% : 40513.164us 00:10:21.729 00:10:21.729 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:21.729 ================================================================================= 00:10:21.729 1.00000% : 7387.695us 00:10:21.729 10.00000% : 7804.742us 00:10:21.729 25.00000% : 8221.789us 00:10:21.729 50.00000% : 8757.993us 00:10:21.729 75.00000% : 9294.196us 00:10:21.729 90.00000% : 9830.400us 00:10:21.729 95.00000% : 
10307.025us 00:10:21.729 98.00000% : 11260.276us 00:10:21.729 99.00000% : 14000.873us 00:10:21.729 99.50000% : 36700.160us 00:10:21.729 99.90000% : 38844.975us 00:10:21.729 99.99000% : 39321.600us 00:10:21.729 99.99900% : 39321.600us 00:10:21.729 99.99990% : 39321.600us 00:10:21.729 99.99999% : 39321.600us 00:10:21.729 00:10:21.729 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:21.729 ================================================================================= 00:10:21.729 1.00000% : 7387.695us 00:10:21.729 10.00000% : 7804.742us 00:10:21.729 25.00000% : 8221.789us 00:10:21.729 50.00000% : 8757.993us 00:10:21.729 75.00000% : 9294.196us 00:10:21.729 90.00000% : 9830.400us 00:10:21.729 95.00000% : 10366.604us 00:10:21.729 98.00000% : 11319.855us 00:10:21.729 99.00000% : 13107.200us 00:10:21.729 99.50000% : 34793.658us 00:10:21.729 99.90000% : 36938.473us 00:10:21.729 99.99000% : 37415.098us 00:10:21.729 99.99900% : 37415.098us 00:10:21.729 99.99990% : 37415.098us 00:10:21.729 99.99999% : 37415.098us 00:10:21.729 00:10:21.729 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:21.729 ================================================================================= 00:10:21.729 1.00000% : 7417.484us 00:10:21.729 10.00000% : 7804.742us 00:10:21.729 25.00000% : 8221.789us 00:10:21.729 50.00000% : 8757.993us 00:10:21.729 75.00000% : 9294.196us 00:10:21.729 90.00000% : 9889.978us 00:10:21.729 95.00000% : 10366.604us 00:10:21.729 98.00000% : 11498.589us 00:10:21.729 99.00000% : 12749.731us 00:10:21.730 99.50000% : 22639.709us 00:10:21.730 99.90000% : 24665.367us 00:10:21.730 99.99000% : 25141.993us 00:10:21.730 99.99900% : 25141.993us 00:10:21.730 99.99990% : 25141.993us 00:10:21.730 99.99999% : 25141.993us 00:10:21.730 00:10:21.730 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:21.730 ================================================================================= 00:10:21.730 1.00000% : 7387.695us 00:10:21.730 10.00000% : 7804.742us 00:10:21.730 25.00000% : 8221.789us 00:10:21.730 50.00000% : 8757.993us 00:10:21.730 75.00000% : 9294.196us 00:10:21.730 90.00000% : 9889.978us 00:10:21.730 95.00000% : 10366.604us 00:10:21.730 98.00000% : 11796.480us 00:10:21.730 99.00000% : 13405.091us 00:10:21.730 99.50000% : 20614.051us 00:10:21.730 99.90000% : 22639.709us 00:10:21.730 99.99000% : 23116.335us 00:10:21.730 99.99900% : 23116.335us 00:10:21.730 99.99990% : 23116.335us 00:10:21.730 99.99999% : 23116.335us 00:10:21.730 00:10:21.730 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:21.730 ============================================================================== 00:10:21.730 Range in us Cumulative IO count 00:10:21.730 6553.600 - 6583.389: 0.0070% ( 1) 00:10:21.730 6583.389 - 6613.178: 0.0211% ( 2) 00:10:21.730 6642.967 - 6672.756: 0.0422% ( 3) 00:10:21.730 6672.756 - 6702.545: 0.0633% ( 3) 00:10:21.730 6702.545 - 6732.335: 0.0845% ( 3) 00:10:21.730 6732.335 - 6762.124: 0.1197% ( 5) 00:10:21.730 6762.124 - 6791.913: 0.1267% ( 1) 00:10:21.730 6791.913 - 6821.702: 0.1548% ( 4) 00:10:21.730 6821.702 - 6851.491: 0.1619% ( 1) 00:10:21.730 6851.491 - 6881.280: 0.1830% ( 3) 00:10:21.730 6881.280 - 6911.069: 0.2041% ( 3) 00:10:21.730 6911.069 - 6940.858: 0.2393% ( 5) 00:10:21.730 6940.858 - 6970.647: 0.2745% ( 5) 00:10:21.730 6970.647 - 7000.436: 0.2956% ( 3) 00:10:21.730 7000.436 - 7030.225: 0.3449% ( 7) 00:10:21.730 7030.225 - 7060.015: 0.3801% ( 5) 00:10:21.730 7060.015 - 7089.804: 0.4575% ( 11) 00:10:21.730 
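Two quick cross-checks on the Device Information table above, both following from the perf command line (-q 128 keeps 128 I/Os outstanding per controller, -o 12288 issues 12288-byte I/Os): by Little's law the reported average latency should be close to queue depth divided by IOPS, and the MiB/s column should equal IOPS times the I/O size. A minimal sketch of that arithmetic, with the numbers hand-copied from the first table row:

    # First row of the Device Information table:
    #   PCIE (0000:00:06.0) NSID 1: 14129.76 IOPS, 165.58 MiB/s, 9052.34 us average
    queue_depth = 128            # from -q 128
    io_size_bytes = 12288        # from -o 12288
    iops = 14129.76

    # Little's law: mean latency = outstanding I/Os / completion rate.
    avg_latency_us = queue_depth / iops * 1e6
    print(f"{avg_latency_us:.1f} us")   # ~9059 us, vs 9052.34 us reported

    # Throughput implied by IOPS and I/O size.
    mib_s = iops * io_size_bytes / 2**20
    print(f"{mib_s:.2f} MiB/s")         # 165.58 MiB/s, matching the table

The small gap between ~9059 us predicted and 9052.34 us reported is expected, since the tool's accounting of the 1-second window is not exactly steady-state.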
7089.804 - 7119.593: 0.5912% ( 19) 00:10:21.730 7119.593 - 7149.382: 0.7390% ( 21) 00:10:21.730 7149.382 - 7179.171: 0.9079% ( 24) 00:10:21.730 7179.171 - 7208.960: 1.0980% ( 27) 00:10:21.730 7208.960 - 7238.749: 1.3584% ( 37) 00:10:21.730 7238.749 - 7268.538: 1.6540% ( 42) 00:10:21.730 7268.538 - 7298.327: 2.0059% ( 50) 00:10:21.730 7298.327 - 7328.116: 2.4001% ( 56) 00:10:21.730 7328.116 - 7357.905: 2.8998% ( 71) 00:10:21.730 7357.905 - 7387.695: 3.4206% ( 74) 00:10:21.730 7387.695 - 7417.484: 3.9907% ( 81) 00:10:21.730 7417.484 - 7447.273: 4.6101% ( 88) 00:10:21.730 7447.273 - 7477.062: 5.2013% ( 84) 00:10:21.730 7477.062 - 7506.851: 5.8981% ( 99) 00:10:21.730 7506.851 - 7536.640: 6.6371% ( 105) 00:10:21.730 7536.640 - 7566.429: 7.3761% ( 105) 00:10:21.730 7566.429 - 7596.218: 8.2066% ( 118) 00:10:21.730 7596.218 - 7626.007: 9.0160% ( 115) 00:10:21.730 7626.007 - 7685.585: 10.7967% ( 253) 00:10:21.730 7685.585 - 7745.164: 12.6197% ( 259) 00:10:21.730 7745.164 - 7804.742: 14.5411% ( 273) 00:10:21.730 7804.742 - 7864.320: 16.4977% ( 278) 00:10:21.730 7864.320 - 7923.898: 18.5811% ( 296) 00:10:21.730 7923.898 - 7983.476: 20.6996% ( 301) 00:10:21.730 7983.476 - 8043.055: 22.7126% ( 286) 00:10:21.730 8043.055 - 8102.633: 24.9578% ( 319) 00:10:21.730 8102.633 - 8162.211: 27.2523% ( 326) 00:10:21.730 8162.211 - 8221.789: 29.5819% ( 331) 00:10:21.730 8221.789 - 8281.367: 31.9749% ( 340) 00:10:21.730 8281.367 - 8340.945: 34.3398% ( 336) 00:10:21.730 8340.945 - 8400.524: 36.7821% ( 347) 00:10:21.730 8400.524 - 8460.102: 39.1540% ( 337) 00:10:21.730 8460.102 - 8519.680: 41.4696% ( 329) 00:10:21.730 8519.680 - 8579.258: 43.9400% ( 351) 00:10:21.730 8579.258 - 8638.836: 46.3190% ( 338) 00:10:21.730 8638.836 - 8698.415: 48.6698% ( 334) 00:10:21.730 8698.415 - 8757.993: 51.0628% ( 340) 00:10:21.730 8757.993 - 8817.571: 53.4206% ( 335) 00:10:21.730 8817.571 - 8877.149: 55.8347% ( 343) 00:10:21.730 8877.149 - 8936.727: 58.1926% ( 335) 00:10:21.730 8936.727 - 8996.305: 60.5997% ( 342) 00:10:21.730 8996.305 - 9055.884: 63.0419% ( 347) 00:10:21.730 9055.884 - 9115.462: 65.4068% ( 336) 00:10:21.730 9115.462 - 9175.040: 67.8350% ( 345) 00:10:21.730 9175.040 - 9234.618: 70.2984% ( 350) 00:10:21.730 9234.618 - 9294.196: 72.6140% ( 329) 00:10:21.730 9294.196 - 9353.775: 74.8592% ( 319) 00:10:21.730 9353.775 - 9413.353: 77.1115% ( 320) 00:10:21.730 9413.353 - 9472.931: 79.1878% ( 295) 00:10:21.730 9472.931 - 9532.509: 81.1796% ( 283) 00:10:21.730 9532.509 - 9592.087: 83.0025% ( 259) 00:10:21.730 9592.087 - 9651.665: 84.6988% ( 241) 00:10:21.730 9651.665 - 9711.244: 86.2965% ( 227) 00:10:21.730 9711.244 - 9770.822: 87.6548% ( 193) 00:10:21.730 9770.822 - 9830.400: 88.7317% ( 153) 00:10:21.730 9830.400 - 9889.978: 89.7452% ( 144) 00:10:21.730 9889.978 - 9949.556: 90.5828% ( 119) 00:10:21.730 9949.556 - 10009.135: 91.3359% ( 107) 00:10:21.730 10009.135 - 10068.713: 91.9552% ( 88) 00:10:21.730 10068.713 - 10128.291: 92.5465% ( 84) 00:10:21.730 10128.291 - 10187.869: 93.1166% ( 81) 00:10:21.730 10187.869 - 10247.447: 93.6092% ( 70) 00:10:21.730 10247.447 - 10307.025: 94.0738% ( 66) 00:10:21.730 10307.025 - 10366.604: 94.5312% ( 65) 00:10:21.730 10366.604 - 10426.182: 94.9606% ( 61) 00:10:21.730 10426.182 - 10485.760: 95.4040% ( 63) 00:10:21.730 10485.760 - 10545.338: 95.8685% ( 66) 00:10:21.730 10545.338 - 10604.916: 96.2275% ( 51) 00:10:21.730 10604.916 - 10664.495: 96.5583% ( 47) 00:10:21.730 10664.495 - 10724.073: 96.8680% ( 44) 00:10:21.730 10724.073 - 10783.651: 97.1213% ( 36) 00:10:21.730 10783.651 - 10843.229: 
97.3677% ( 35) 00:10:21.730 10843.229 - 10902.807: 97.5436% ( 25) 00:10:21.730 10902.807 - 10962.385: 97.6633% ( 17) 00:10:21.730 10962.385 - 11021.964: 97.7689% ( 15) 00:10:21.730 11021.964 - 11081.542: 97.8463% ( 11) 00:10:21.730 11081.542 - 11141.120: 97.9237% ( 11) 00:10:21.730 11141.120 - 11200.698: 97.9941% ( 10) 00:10:21.730 11200.698 - 11260.276: 98.0574% ( 9) 00:10:21.730 11260.276 - 11319.855: 98.0926% ( 5) 00:10:21.730 11319.855 - 11379.433: 98.1208% ( 4) 00:10:21.730 11379.433 - 11439.011: 98.1419% ( 3) 00:10:21.730 11439.011 - 11498.589: 98.1771% ( 5) 00:10:21.730 11498.589 - 11558.167: 98.2052% ( 4) 00:10:21.730 11558.167 - 11617.745: 98.2264% ( 3) 00:10:21.730 11617.745 - 11677.324: 98.2686% ( 6) 00:10:21.730 11677.324 - 11736.902: 98.2967% ( 4) 00:10:21.730 11736.902 - 11796.480: 98.3249% ( 4) 00:10:21.730 11796.480 - 11856.058: 98.3460% ( 3) 00:10:21.730 11856.058 - 11915.636: 98.3882% ( 6) 00:10:21.730 11915.636 - 11975.215: 98.4234% ( 5) 00:10:21.730 11975.215 - 12034.793: 98.4445% ( 3) 00:10:21.730 12034.793 - 12094.371: 98.4797% ( 5) 00:10:21.730 12094.371 - 12153.949: 98.5079% ( 4) 00:10:21.730 12153.949 - 12213.527: 98.5360% ( 4) 00:10:21.730 12213.527 - 12273.105: 98.5501% ( 2) 00:10:21.730 12273.105 - 12332.684: 98.5642% ( 2) 00:10:21.730 12332.684 - 12392.262: 98.5783% ( 2) 00:10:21.730 12392.262 - 12451.840: 98.5994% ( 3) 00:10:21.730 12451.840 - 12511.418: 98.6205% ( 3) 00:10:21.730 12511.418 - 12570.996: 98.6275% ( 1) 00:10:21.730 12570.996 - 12630.575: 98.6486% ( 3) 00:10:21.730 12630.575 - 12690.153: 98.6698% ( 3) 00:10:21.730 12690.153 - 12749.731: 98.6768% ( 1) 00:10:21.730 12749.731 - 12809.309: 98.6909% ( 2) 00:10:21.730 12809.309 - 12868.887: 98.6979% ( 1) 00:10:21.730 12868.887 - 12928.465: 98.7190% ( 3) 00:10:21.730 12928.465 - 12988.044: 98.7331% ( 2) 00:10:21.730 12988.044 - 13047.622: 98.7472% ( 2) 00:10:21.730 13047.622 - 13107.200: 98.7683% ( 3) 00:10:21.730 13107.200 - 13166.778: 98.7824% ( 2) 00:10:21.730 13166.778 - 13226.356: 98.7965% ( 2) 00:10:21.730 13226.356 - 13285.935: 98.8176% ( 3) 00:10:21.730 13285.935 - 13345.513: 98.8246% ( 1) 00:10:21.730 13345.513 - 13405.091: 98.8457% ( 3) 00:10:21.730 13405.091 - 13464.669: 98.8598% ( 2) 00:10:21.730 13464.669 - 13524.247: 98.8739% ( 2) 00:10:21.730 13524.247 - 13583.825: 98.8880% ( 2) 00:10:21.730 13583.825 - 13643.404: 98.9091% ( 3) 00:10:21.730 13643.404 - 13702.982: 98.9161% ( 1) 00:10:21.730 13702.982 - 13762.560: 98.9372% ( 3) 00:10:21.730 13762.560 - 13822.138: 98.9513% ( 2) 00:10:21.730 13822.138 - 13881.716: 98.9724% ( 3) 00:10:21.730 13881.716 - 13941.295: 98.9865% ( 2) 00:10:21.730 13941.295 - 14000.873: 99.0006% ( 2) 00:10:21.730 14000.873 - 14060.451: 99.0146% ( 2) 00:10:21.730 14060.451 - 14120.029: 99.0358% ( 3) 00:10:21.730 14120.029 - 14179.607: 99.0498% ( 2) 00:10:21.730 14179.607 - 14239.185: 99.0709% ( 3) 00:10:21.730 14298.764 - 14358.342: 99.0991% ( 4) 00:10:21.730 36700.160 - 36938.473: 99.1061% ( 1) 00:10:21.730 36938.473 - 37176.785: 99.1484% ( 6) 00:10:21.730 37176.785 - 37415.098: 99.1836% ( 5) 00:10:21.730 37415.098 - 37653.411: 99.2328% ( 7) 00:10:21.730 37653.411 - 37891.724: 99.2610% ( 4) 00:10:21.730 37891.724 - 38130.036: 99.3032% ( 6) 00:10:21.730 38130.036 - 38368.349: 99.3454% ( 6) 00:10:21.730 38368.349 - 38606.662: 99.3806% ( 5) 00:10:21.730 38606.662 - 38844.975: 99.3947% ( 2) 00:10:21.730 38844.975 - 39083.287: 99.4510% ( 8) 00:10:21.730 39083.287 - 39321.600: 99.4862% ( 5) 00:10:21.730 39321.600 - 39559.913: 99.5214% ( 5) 00:10:21.730 39559.913 - 
39798.225: 99.5636% ( 6) 00:10:21.730 39798.225 - 40036.538: 99.6059% ( 6) 00:10:21.730 40036.538 - 40274.851: 99.6551% ( 7) 00:10:21.730 40274.851 - 40513.164: 99.6974% ( 6) 00:10:21.730 40513.164 - 40751.476: 99.7466% ( 7) 00:10:21.730 40751.476 - 40989.789: 99.7889% ( 6) 00:10:21.730 40989.789 - 41228.102: 99.8381% ( 7) 00:10:21.730 41228.102 - 41466.415: 99.8874% ( 7) 00:10:21.730 41466.415 - 41704.727: 99.9296% ( 6) 00:10:21.730 41704.727 - 41943.040: 99.9718% ( 6) 00:10:21.730 41943.040 - 42181.353: 100.0000% ( 4) 00:10:21.730 00:10:21.730 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:21.730 ============================================================================== 00:10:21.730 Range in us Cumulative IO count 00:10:21.731 6702.545 - 6732.335: 0.0070% ( 1) 00:10:21.731 6732.335 - 6762.124: 0.0352% ( 4) 00:10:21.731 6762.124 - 6791.913: 0.0422% ( 1) 00:10:21.731 6791.913 - 6821.702: 0.0563% ( 2) 00:10:21.731 6821.702 - 6851.491: 0.0774% ( 3) 00:10:21.731 6851.491 - 6881.280: 0.0985% ( 3) 00:10:21.731 6881.280 - 6911.069: 0.1478% ( 7) 00:10:21.731 6911.069 - 6940.858: 0.1760% ( 4) 00:10:21.731 6940.858 - 6970.647: 0.2041% ( 4) 00:10:21.731 6970.647 - 7000.436: 0.2323% ( 4) 00:10:21.731 7000.436 - 7030.225: 0.2745% ( 6) 00:10:21.731 7030.225 - 7060.015: 0.3097% ( 5) 00:10:21.731 7060.015 - 7089.804: 0.3378% ( 4) 00:10:21.731 7089.804 - 7119.593: 0.3730% ( 5) 00:10:21.731 7119.593 - 7149.382: 0.4082% ( 5) 00:10:21.731 7149.382 - 7179.171: 0.4434% ( 5) 00:10:21.731 7179.171 - 7208.960: 0.4786% ( 5) 00:10:21.731 7208.960 - 7238.749: 0.5419% ( 9) 00:10:21.731 7238.749 - 7268.538: 0.6264% ( 12) 00:10:21.731 7268.538 - 7298.327: 0.7320% ( 15) 00:10:21.731 7298.327 - 7328.116: 0.8657% ( 19) 00:10:21.731 7328.116 - 7357.905: 1.0276% ( 23) 00:10:21.731 7357.905 - 7387.695: 1.2528% ( 32) 00:10:21.731 7387.695 - 7417.484: 1.5414% ( 41) 00:10:21.731 7417.484 - 7447.273: 1.8722% ( 47) 00:10:21.731 7447.273 - 7477.062: 2.2452% ( 53) 00:10:21.731 7477.062 - 7506.851: 2.7238% ( 68) 00:10:21.731 7506.851 - 7536.640: 3.2447% ( 74) 00:10:21.731 7536.640 - 7566.429: 3.9485% ( 100) 00:10:21.731 7566.429 - 7596.218: 4.6242% ( 96) 00:10:21.731 7596.218 - 7626.007: 5.3491% ( 103) 00:10:21.731 7626.007 - 7685.585: 6.9116% ( 222) 00:10:21.731 7685.585 - 7745.164: 8.6501% ( 247) 00:10:21.731 7745.164 - 7804.742: 10.5293% ( 267) 00:10:21.731 7804.742 - 7864.320: 12.5493% ( 287) 00:10:21.731 7864.320 - 7923.898: 14.6748% ( 302) 00:10:21.731 7923.898 - 7983.476: 16.9200% ( 319) 00:10:21.731 7983.476 - 8043.055: 19.2990% ( 338) 00:10:21.731 8043.055 - 8102.633: 21.6357% ( 332) 00:10:21.731 8102.633 - 8162.211: 24.0358% ( 341) 00:10:21.731 8162.211 - 8221.789: 26.5273% ( 354) 00:10:21.731 8221.789 - 8281.367: 29.0752% ( 362) 00:10:21.731 8281.367 - 8340.945: 31.7005% ( 373) 00:10:21.731 8340.945 - 8400.524: 34.3750% ( 380) 00:10:21.731 8400.524 - 8460.102: 37.2396% ( 407) 00:10:21.731 8460.102 - 8519.680: 40.0267% ( 396) 00:10:21.731 8519.680 - 8579.258: 42.9195% ( 411) 00:10:21.731 8579.258 - 8638.836: 45.8193% ( 412) 00:10:21.731 8638.836 - 8698.415: 48.5923% ( 394) 00:10:21.731 8698.415 - 8757.993: 51.5132% ( 415) 00:10:21.731 8757.993 - 8817.571: 54.3637% ( 405) 00:10:21.731 8817.571 - 8877.149: 57.2494% ( 410) 00:10:21.731 8877.149 - 8936.727: 60.1914% ( 418) 00:10:21.731 8936.727 - 8996.305: 63.0983% ( 413) 00:10:21.731 8996.305 - 9055.884: 65.8854% ( 396) 00:10:21.731 9055.884 - 9115.462: 68.7993% ( 414) 00:10:21.731 9115.462 - 9175.040: 71.5583% ( 392) 00:10:21.731 9175.040 - 9234.618: 
74.2328% ( 380) 00:10:21.731 9234.618 - 9294.196: 76.7666% ( 360) 00:10:21.731 9294.196 - 9353.775: 79.1526% ( 339) 00:10:21.731 9353.775 - 9413.353: 81.2500% ( 298) 00:10:21.731 9413.353 - 9472.931: 83.2489% ( 284) 00:10:21.731 9472.931 - 9532.509: 85.1070% ( 264) 00:10:21.731 9532.509 - 9592.087: 86.6695% ( 222) 00:10:21.731 9592.087 - 9651.665: 87.9786% ( 186) 00:10:21.731 9651.665 - 9711.244: 89.0907% ( 158) 00:10:21.731 9711.244 - 9770.822: 89.9845% ( 127) 00:10:21.731 9770.822 - 9830.400: 90.7517% ( 109) 00:10:21.731 9830.400 - 9889.978: 91.4907% ( 105) 00:10:21.731 9889.978 - 9949.556: 92.2016% ( 101) 00:10:21.731 9949.556 - 10009.135: 92.8139% ( 87) 00:10:21.731 10009.135 - 10068.713: 93.4051% ( 84) 00:10:21.731 10068.713 - 10128.291: 93.9330% ( 75) 00:10:21.731 10128.291 - 10187.869: 94.4961% ( 80) 00:10:21.731 10187.869 - 10247.447: 95.0380% ( 77) 00:10:21.731 10247.447 - 10307.025: 95.5448% ( 72) 00:10:21.731 10307.025 - 10366.604: 95.9882% ( 63) 00:10:21.731 10366.604 - 10426.182: 96.3753% ( 55) 00:10:21.731 10426.182 - 10485.760: 96.7342% ( 51) 00:10:21.731 10485.760 - 10545.338: 97.0791% ( 49) 00:10:21.731 10545.338 - 10604.916: 97.2973% ( 31) 00:10:21.731 10604.916 - 10664.495: 97.4521% ( 22) 00:10:21.731 10664.495 - 10724.073: 97.5366% ( 12) 00:10:21.731 10724.073 - 10783.651: 97.5999% ( 9) 00:10:21.731 10783.651 - 10843.229: 97.6492% ( 7) 00:10:21.731 10843.229 - 10902.807: 97.7196% ( 10) 00:10:21.731 10902.807 - 10962.385: 97.7829% ( 9) 00:10:21.731 10962.385 - 11021.964: 97.8252% ( 6) 00:10:21.731 11021.964 - 11081.542: 97.8604% ( 5) 00:10:21.731 11081.542 - 11141.120: 97.9026% ( 6) 00:10:21.731 11141.120 - 11200.698: 97.9378% ( 5) 00:10:21.731 11200.698 - 11260.276: 97.9800% ( 6) 00:10:21.731 11260.276 - 11319.855: 98.0152% ( 5) 00:10:21.731 11319.855 - 11379.433: 98.0574% ( 6) 00:10:21.731 11379.433 - 11439.011: 98.0926% ( 5) 00:10:21.731 11439.011 - 11498.589: 98.1278% ( 5) 00:10:21.731 11498.589 - 11558.167: 98.1630% ( 5) 00:10:21.731 11558.167 - 11617.745: 98.1771% ( 2) 00:10:21.731 11617.745 - 11677.324: 98.1912% ( 2) 00:10:21.731 11677.324 - 11736.902: 98.1982% ( 1) 00:10:21.731 12273.105 - 12332.684: 98.2052% ( 1) 00:10:21.731 12332.684 - 12392.262: 98.2264% ( 3) 00:10:21.731 12392.262 - 12451.840: 98.2404% ( 2) 00:10:21.731 12451.840 - 12511.418: 98.2615% ( 3) 00:10:21.731 12511.418 - 12570.996: 98.2756% ( 2) 00:10:21.731 12570.996 - 12630.575: 98.2967% ( 3) 00:10:21.731 12630.575 - 12690.153: 98.3249% ( 4) 00:10:21.731 12690.153 - 12749.731: 98.3390% ( 2) 00:10:21.731 12749.731 - 12809.309: 98.3530% ( 2) 00:10:21.731 12809.309 - 12868.887: 98.3671% ( 2) 00:10:21.731 12868.887 - 12928.465: 98.3882% ( 3) 00:10:21.731 12928.465 - 12988.044: 98.4093% ( 3) 00:10:21.731 12988.044 - 13047.622: 98.4234% ( 2) 00:10:21.731 13047.622 - 13107.200: 98.4445% ( 3) 00:10:21.731 13107.200 - 13166.778: 98.4586% ( 2) 00:10:21.731 13166.778 - 13226.356: 98.4797% ( 3) 00:10:21.731 13226.356 - 13285.935: 98.4938% ( 2) 00:10:21.731 13285.935 - 13345.513: 98.5079% ( 2) 00:10:21.731 13345.513 - 13405.091: 98.5290% ( 3) 00:10:21.731 13405.091 - 13464.669: 98.5431% ( 2) 00:10:21.731 13464.669 - 13524.247: 98.5572% ( 2) 00:10:21.731 13524.247 - 13583.825: 98.5783% ( 3) 00:10:21.731 13583.825 - 13643.404: 98.5923% ( 2) 00:10:21.731 13643.404 - 13702.982: 98.6064% ( 2) 00:10:21.731 13702.982 - 13762.560: 98.6275% ( 3) 00:10:21.731 13762.560 - 13822.138: 98.6486% ( 3) 00:10:21.731 13822.138 - 13881.716: 98.6627% ( 2) 00:10:21.731 13881.716 - 13941.295: 98.6838% ( 3) 00:10:21.731 13941.295 - 
14000.873: 98.7050% ( 3) 00:10:21.731 14000.873 - 14060.451: 98.7261% ( 3) 00:10:21.731 14060.451 - 14120.029: 98.7401% ( 2) 00:10:21.731 14120.029 - 14179.607: 98.7542% ( 2) 00:10:21.731 14179.607 - 14239.185: 98.7753% ( 3) 00:10:21.731 14239.185 - 14298.764: 98.7894% ( 2) 00:10:21.731 14298.764 - 14358.342: 98.8105% ( 3) 00:10:21.731 14358.342 - 14417.920: 98.8246% ( 2) 00:10:21.731 14417.920 - 14477.498: 98.8457% ( 3) 00:10:21.731 14477.498 - 14537.076: 98.8668% ( 3) 00:10:21.731 14537.076 - 14596.655: 98.8809% ( 2) 00:10:21.731 14596.655 - 14656.233: 98.9020% ( 3) 00:10:21.731 14656.233 - 14715.811: 98.9231% ( 3) 00:10:21.731 14715.811 - 14775.389: 98.9443% ( 3) 00:10:21.731 14775.389 - 14834.967: 98.9583% ( 2) 00:10:21.731 14834.967 - 14894.545: 98.9794% ( 3) 00:10:21.731 14894.545 - 14954.124: 99.0006% ( 3) 00:10:21.731 14954.124 - 15013.702: 99.0146% ( 2) 00:10:21.731 15013.702 - 15073.280: 99.0358% ( 3) 00:10:21.731 15073.280 - 15132.858: 99.0498% ( 2) 00:10:21.731 15132.858 - 15192.436: 99.0709% ( 3) 00:10:21.731 15192.436 - 15252.015: 99.0921% ( 3) 00:10:21.731 15252.015 - 15371.171: 99.0991% ( 1) 00:10:21.731 35270.284 - 35508.596: 99.1273% ( 4) 00:10:21.731 35508.596 - 35746.909: 99.1695% ( 6) 00:10:21.731 35746.909 - 35985.222: 99.2117% ( 6) 00:10:21.731 35985.222 - 36223.535: 99.2539% ( 6) 00:10:21.731 36223.535 - 36461.847: 99.2962% ( 6) 00:10:21.731 36461.847 - 36700.160: 99.3454% ( 7) 00:10:21.731 36700.160 - 36938.473: 99.3877% ( 6) 00:10:21.731 36938.473 - 37176.785: 99.4299% ( 6) 00:10:21.731 37176.785 - 37415.098: 99.4721% ( 6) 00:10:21.731 37415.098 - 37653.411: 99.5144% ( 6) 00:10:21.731 37653.411 - 37891.724: 99.5566% ( 6) 00:10:21.731 37891.724 - 38130.036: 99.5988% ( 6) 00:10:21.731 38130.036 - 38368.349: 99.6410% ( 6) 00:10:21.731 38368.349 - 38606.662: 99.6833% ( 6) 00:10:21.731 38606.662 - 38844.975: 99.7325% ( 7) 00:10:21.731 38844.975 - 39083.287: 99.7748% ( 6) 00:10:21.731 39083.287 - 39321.600: 99.8170% ( 6) 00:10:21.731 39321.600 - 39559.913: 99.8592% ( 6) 00:10:21.731 39559.913 - 39798.225: 99.9085% ( 7) 00:10:21.731 39798.225 - 40036.538: 99.9437% ( 5) 00:10:21.731 40036.538 - 40274.851: 99.9859% ( 6) 00:10:21.731 40274.851 - 40513.164: 100.0000% ( 2) 00:10:21.731 00:10:21.731 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:21.731 ============================================================================== 00:10:21.731 Range in us Cumulative IO count 00:10:21.731 6970.647 - 7000.436: 0.0070% ( 1) 00:10:21.731 7000.436 - 7030.225: 0.0211% ( 2) 00:10:21.731 7030.225 - 7060.015: 0.0352% ( 2) 00:10:21.731 7060.015 - 7089.804: 0.0704% ( 5) 00:10:21.731 7089.804 - 7119.593: 0.0985% ( 4) 00:10:21.731 7119.593 - 7149.382: 0.1337% ( 5) 00:10:21.731 7149.382 - 7179.171: 0.1689% ( 5) 00:10:21.731 7179.171 - 7208.960: 0.2323% ( 9) 00:10:21.731 7208.960 - 7238.749: 0.3519% ( 17) 00:10:21.731 7238.749 - 7268.538: 0.4997% ( 21) 00:10:21.731 7268.538 - 7298.327: 0.6264% ( 18) 00:10:21.731 7298.327 - 7328.116: 0.7601% ( 19) 00:10:21.731 7328.116 - 7357.905: 0.9431% ( 26) 00:10:21.731 7357.905 - 7387.695: 1.2035% ( 37) 00:10:21.731 7387.695 - 7417.484: 1.5132% ( 44) 00:10:21.731 7417.484 - 7447.273: 1.8511% ( 48) 00:10:21.731 7447.273 - 7477.062: 2.2945% ( 63) 00:10:21.731 7477.062 - 7506.851: 2.7379% ( 63) 00:10:21.732 7506.851 - 7536.640: 3.3713% ( 90) 00:10:21.732 7536.640 - 7566.429: 3.9977% ( 89) 00:10:21.732 7566.429 - 7596.218: 4.6945% ( 99) 00:10:21.732 7596.218 - 7626.007: 5.3632% ( 95) 00:10:21.732 7626.007 - 7685.585: 6.9609% ( 227) 
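The cumulative histograms in this output list, per latency bucket, the running percentage of I/Os completed by the bucket's upper edge, with the bucket's own I/O count in parentheses. A percentile is therefore just the upper edge of the first bucket whose cumulative percentage reaches the target, which is how the 95.00000% and 98.00000% lines in the summaries above can be reproduced. A small sketch, assuming the rows have already been parsed into (low_us, high_us, cumulative_pct) tuples:

    def percentile_us(rows, target_pct):
        """rows: list of (low_us, high_us, cumulative_pct), ascending.
        Returns the upper bucket edge where the cumulative share of
        completed I/Os first reaches target_pct."""
        for _low, high, cum_pct in rows:
            if cum_pct >= target_pct:
                return high
        return None

    # A few rows transcribed from the 0000:00:06.0 histogram above:
    rows = [
        (10366.604, 10426.182, 94.9606),
        (10426.182, 10485.760, 95.4040),
        (11141.120, 11200.698, 97.9941),
        (11200.698, 11260.276, 98.0574),
    ]
    print(percentile_us(rows, 95.0))   # 10485.76  -> summary: "95.00000% : 10485.760us"
    print(percentile_us(rows, 98.0))   # 11260.276 -> summary: "98.00000% : 11260.276us"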
00:10:21.732 7685.585 - 7745.164: 8.6712% ( 243) 00:10:21.732 7745.164 - 7804.742: 10.6560% ( 282) 00:10:21.732 7804.742 - 7864.320: 12.6760% ( 287) 00:10:21.732 7864.320 - 7923.898: 14.8226% ( 305) 00:10:21.732 7923.898 - 7983.476: 17.0397% ( 315) 00:10:21.732 7983.476 - 8043.055: 19.3764% ( 332) 00:10:21.732 8043.055 - 8102.633: 21.6850% ( 328) 00:10:21.732 8102.633 - 8162.211: 24.0358% ( 334) 00:10:21.732 8162.211 - 8221.789: 26.4710% ( 346) 00:10:21.732 8221.789 - 8281.367: 29.0681% ( 369) 00:10:21.732 8281.367 - 8340.945: 31.7286% ( 378) 00:10:21.732 8340.945 - 8400.524: 34.4524% ( 387) 00:10:21.732 8400.524 - 8460.102: 37.2325% ( 395) 00:10:21.732 8460.102 - 8519.680: 39.9986% ( 393) 00:10:21.732 8519.680 - 8579.258: 42.7858% ( 396) 00:10:21.732 8579.258 - 8638.836: 45.6363% ( 405) 00:10:21.732 8638.836 - 8698.415: 48.4093% ( 394) 00:10:21.732 8698.415 - 8757.993: 51.2387% ( 402) 00:10:21.732 8757.993 - 8817.571: 54.0752% ( 403) 00:10:21.732 8817.571 - 8877.149: 56.9327% ( 406) 00:10:21.732 8877.149 - 8936.727: 59.8043% ( 408) 00:10:21.732 8936.727 - 8996.305: 62.6408% ( 403) 00:10:21.732 8996.305 - 9055.884: 65.4139% ( 394) 00:10:21.732 9055.884 - 9115.462: 68.3277% ( 414) 00:10:21.732 9115.462 - 9175.040: 71.1008% ( 394) 00:10:21.732 9175.040 - 9234.618: 73.8176% ( 386) 00:10:21.732 9234.618 - 9294.196: 76.2880% ( 351) 00:10:21.732 9294.196 - 9353.775: 78.5473% ( 321) 00:10:21.732 9353.775 - 9413.353: 80.7081% ( 307) 00:10:21.732 9413.353 - 9472.931: 82.7280% ( 287) 00:10:21.732 9472.931 - 9532.509: 84.4876% ( 250) 00:10:21.732 9532.509 - 9592.087: 86.1205% ( 232) 00:10:21.732 9592.087 - 9651.665: 87.4789% ( 193) 00:10:21.732 9651.665 - 9711.244: 88.7106% ( 175) 00:10:21.732 9711.244 - 9770.822: 89.7030% ( 141) 00:10:21.732 9770.822 - 9830.400: 90.5898% ( 126) 00:10:21.732 9830.400 - 9889.978: 91.3781% ( 112) 00:10:21.732 9889.978 - 9949.556: 92.1242% ( 106) 00:10:21.732 9949.556 - 10009.135: 92.7435% ( 88) 00:10:21.732 10009.135 - 10068.713: 93.3347% ( 84) 00:10:21.732 10068.713 - 10128.291: 93.8697% ( 76) 00:10:21.732 10128.291 - 10187.869: 94.4327% ( 80) 00:10:21.732 10187.869 - 10247.447: 94.9254% ( 70) 00:10:21.732 10247.447 - 10307.025: 95.4110% ( 69) 00:10:21.732 10307.025 - 10366.604: 95.8896% ( 68) 00:10:21.732 10366.604 - 10426.182: 96.2838% ( 56) 00:10:21.732 10426.182 - 10485.760: 96.6357% ( 50) 00:10:21.732 10485.760 - 10545.338: 96.9383% ( 43) 00:10:21.732 10545.338 - 10604.916: 97.1565% ( 31) 00:10:21.732 10604.916 - 10664.495: 97.3325% ( 25) 00:10:21.732 10664.495 - 10724.073: 97.4733% ( 20) 00:10:21.732 10724.073 - 10783.651: 97.5648% ( 13) 00:10:21.732 10783.651 - 10843.229: 97.6351% ( 10) 00:10:21.732 10843.229 - 10902.807: 97.6844% ( 7) 00:10:21.732 10902.807 - 10962.385: 97.7477% ( 9) 00:10:21.732 10962.385 - 11021.964: 97.8041% ( 8) 00:10:21.732 11021.964 - 11081.542: 97.8674% ( 9) 00:10:21.732 11081.542 - 11141.120: 97.9096% ( 6) 00:10:21.732 11141.120 - 11200.698: 97.9589% ( 7) 00:10:21.732 11200.698 - 11260.276: 98.0082% ( 7) 00:10:21.732 11260.276 - 11319.855: 98.0645% ( 8) 00:10:21.732 11319.855 - 11379.433: 98.1208% ( 8) 00:10:21.732 11379.433 - 11439.011: 98.1700% ( 7) 00:10:21.732 11439.011 - 11498.589: 98.2264% ( 8) 00:10:21.732 11498.589 - 11558.167: 98.2756% ( 7) 00:10:21.732 11558.167 - 11617.745: 98.3108% ( 5) 00:10:21.732 11617.745 - 11677.324: 98.3390% ( 4) 00:10:21.732 11677.324 - 11736.902: 98.3601% ( 3) 00:10:21.732 11736.902 - 11796.480: 98.3742% ( 2) 00:10:21.732 11796.480 - 11856.058: 98.3953% ( 3) 00:10:21.732 11856.058 - 11915.636: 
98.4093% ( 2) 00:10:21.732 11915.636 - 11975.215: 98.4234% ( 2) 00:10:21.732 11975.215 - 12034.793: 98.4445% ( 3) 00:10:21.732 12034.793 - 12094.371: 98.4586% ( 2) 00:10:21.732 12094.371 - 12153.949: 98.4797% ( 3) 00:10:21.732 12153.949 - 12213.527: 98.5008% ( 3) 00:10:21.732 12213.527 - 12273.105: 98.5149% ( 2) 00:10:21.732 12273.105 - 12332.684: 98.5360% ( 3) 00:10:21.732 12332.684 - 12392.262: 98.5501% ( 2) 00:10:21.732 12392.262 - 12451.840: 98.5642% ( 2) 00:10:21.732 12451.840 - 12511.418: 98.5783% ( 2) 00:10:21.732 12511.418 - 12570.996: 98.5994% ( 3) 00:10:21.732 12570.996 - 12630.575: 98.6205% ( 3) 00:10:21.732 12630.575 - 12690.153: 98.6346% ( 2) 00:10:21.732 12690.153 - 12749.731: 98.6486% ( 2) 00:10:21.732 12749.731 - 12809.309: 98.6698% ( 3) 00:10:21.732 12809.309 - 12868.887: 98.6838% ( 2) 00:10:21.732 12868.887 - 12928.465: 98.7050% ( 3) 00:10:21.732 12928.465 - 12988.044: 98.7190% ( 2) 00:10:21.732 12988.044 - 13047.622: 98.7401% ( 3) 00:10:21.732 13047.622 - 13107.200: 98.7542% ( 2) 00:10:21.732 13107.200 - 13166.778: 98.7683% ( 2) 00:10:21.732 13166.778 - 13226.356: 98.7824% ( 2) 00:10:21.732 13226.356 - 13285.935: 98.8035% ( 3) 00:10:21.732 13285.935 - 13345.513: 98.8176% ( 2) 00:10:21.732 13345.513 - 13405.091: 98.8316% ( 2) 00:10:21.732 13405.091 - 13464.669: 98.8528% ( 3) 00:10:21.732 13464.669 - 13524.247: 98.8668% ( 2) 00:10:21.732 13524.247 - 13583.825: 98.8809% ( 2) 00:10:21.732 13583.825 - 13643.404: 98.9020% ( 3) 00:10:21.732 13643.404 - 13702.982: 98.9231% ( 3) 00:10:21.732 13702.982 - 13762.560: 98.9372% ( 2) 00:10:21.732 13762.560 - 13822.138: 98.9513% ( 2) 00:10:21.732 13822.138 - 13881.716: 98.9724% ( 3) 00:10:21.732 13881.716 - 13941.295: 98.9865% ( 2) 00:10:21.732 13941.295 - 14000.873: 99.0076% ( 3) 00:10:21.732 14000.873 - 14060.451: 99.0217% ( 2) 00:10:21.732 14060.451 - 14120.029: 99.0358% ( 2) 00:10:21.732 14120.029 - 14179.607: 99.0569% ( 3) 00:10:21.732 14179.607 - 14239.185: 99.0709% ( 2) 00:10:21.732 14239.185 - 14298.764: 99.0850% ( 2) 00:10:21.732 14298.764 - 14358.342: 99.0921% ( 1) 00:10:21.732 14358.342 - 14417.920: 99.0991% ( 1) 00:10:21.732 34078.720 - 34317.033: 99.1132% ( 2) 00:10:21.732 34317.033 - 34555.345: 99.1484% ( 5) 00:10:21.732 34555.345 - 34793.658: 99.1906% ( 6) 00:10:21.732 34793.658 - 35031.971: 99.2328% ( 6) 00:10:21.732 35031.971 - 35270.284: 99.2751% ( 6) 00:10:21.732 35270.284 - 35508.596: 99.3243% ( 7) 00:10:21.732 35508.596 - 35746.909: 99.3666% ( 6) 00:10:21.732 35746.909 - 35985.222: 99.4088% ( 6) 00:10:21.732 35985.222 - 36223.535: 99.4440% ( 5) 00:10:21.732 36223.535 - 36461.847: 99.4862% ( 6) 00:10:21.732 36461.847 - 36700.160: 99.5284% ( 6) 00:10:21.732 36700.160 - 36938.473: 99.5777% ( 7) 00:10:21.732 36938.473 - 37176.785: 99.6129% ( 5) 00:10:21.732 37176.785 - 37415.098: 99.6551% ( 6) 00:10:21.732 37415.098 - 37653.411: 99.6974% ( 6) 00:10:21.732 37653.411 - 37891.724: 99.7466% ( 7) 00:10:21.732 37891.724 - 38130.036: 99.7889% ( 6) 00:10:21.732 38130.036 - 38368.349: 99.8311% ( 6) 00:10:21.732 38368.349 - 38606.662: 99.8803% ( 7) 00:10:21.732 38606.662 - 38844.975: 99.9226% ( 6) 00:10:21.732 38844.975 - 39083.287: 99.9648% ( 6) 00:10:21.732 39083.287 - 39321.600: 100.0000% ( 5) 00:10:21.732 00:10:21.732 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:21.732 ============================================================================== 00:10:21.732 Range in us Cumulative IO count 00:10:21.732 6911.069 - 6940.858: 0.0070% ( 1) 00:10:21.732 6940.858 - 6970.647: 0.0352% ( 4) 00:10:21.732 
6970.647 - 7000.436: 0.0633% ( 4) 00:10:21.732 7000.436 - 7030.225: 0.0774% ( 2) 00:10:21.732 7030.225 - 7060.015: 0.0985% ( 3) 00:10:21.732 7060.015 - 7089.804: 0.1126% ( 2) 00:10:21.732 7089.804 - 7119.593: 0.1478% ( 5) 00:10:21.732 7119.593 - 7149.382: 0.1900% ( 6) 00:10:21.732 7149.382 - 7179.171: 0.2323% ( 6) 00:10:21.732 7179.171 - 7208.960: 0.2745% ( 6) 00:10:21.732 7208.960 - 7238.749: 0.3308% ( 8) 00:10:21.732 7238.749 - 7268.538: 0.4223% ( 13) 00:10:21.732 7268.538 - 7298.327: 0.4927% ( 10) 00:10:21.732 7298.327 - 7328.116: 0.5842% ( 13) 00:10:21.732 7328.116 - 7357.905: 0.7390% ( 22) 00:10:21.732 7357.905 - 7387.695: 1.0206% ( 40) 00:10:21.732 7387.695 - 7417.484: 1.2950% ( 39) 00:10:21.732 7417.484 - 7447.273: 1.6329% ( 48) 00:10:21.732 7447.273 - 7477.062: 1.9848% ( 50) 00:10:21.732 7477.062 - 7506.851: 2.4423% ( 65) 00:10:21.732 7506.851 - 7536.640: 2.9772% ( 76) 00:10:21.732 7536.640 - 7566.429: 3.6247% ( 92) 00:10:21.732 7566.429 - 7596.218: 4.3285% ( 100) 00:10:21.732 7596.218 - 7626.007: 5.0816% ( 107) 00:10:21.732 7626.007 - 7685.585: 6.7427% ( 236) 00:10:21.732 7685.585 - 7745.164: 8.5304% ( 254) 00:10:21.732 7745.164 - 7804.742: 10.5222% ( 283) 00:10:21.732 7804.742 - 7864.320: 12.6408% ( 301) 00:10:21.732 7864.320 - 7923.898: 14.7523% ( 300) 00:10:21.732 7923.898 - 7983.476: 17.0678% ( 329) 00:10:21.732 7983.476 - 8043.055: 19.3834% ( 329) 00:10:21.732 8043.055 - 8102.633: 21.6990% ( 329) 00:10:21.732 8102.633 - 8162.211: 24.1061% ( 342) 00:10:21.732 8162.211 - 8221.789: 26.6470% ( 361) 00:10:21.732 8221.789 - 8281.367: 29.2230% ( 366) 00:10:21.732 8281.367 - 8340.945: 31.9046% ( 381) 00:10:21.732 8340.945 - 8400.524: 34.4947% ( 368) 00:10:21.732 8400.524 - 8460.102: 37.2466% ( 391) 00:10:21.732 8460.102 - 8519.680: 39.9564% ( 385) 00:10:21.732 8519.680 - 8579.258: 42.7506% ( 397) 00:10:21.732 8579.258 - 8638.836: 45.5729% ( 401) 00:10:21.732 8638.836 - 8698.415: 48.3390% ( 393) 00:10:21.732 8698.415 - 8757.993: 51.1613% ( 401) 00:10:21.732 8757.993 - 8817.571: 54.0400% ( 409) 00:10:21.732 8817.571 - 8877.149: 56.8764% ( 403) 00:10:21.732 8877.149 - 8936.727: 59.6917% ( 400) 00:10:21.733 8936.727 - 8996.305: 62.5352% ( 404) 00:10:21.733 8996.305 - 9055.884: 65.4209% ( 410) 00:10:21.733 9055.884 - 9115.462: 68.1729% ( 391) 00:10:21.733 9115.462 - 9175.040: 70.9530% ( 395) 00:10:21.733 9175.040 - 9234.618: 73.6064% ( 377) 00:10:21.733 9234.618 - 9294.196: 76.1824% ( 366) 00:10:21.733 9294.196 - 9353.775: 78.4769% ( 326) 00:10:21.733 9353.775 - 9413.353: 80.5602% ( 296) 00:10:21.733 9413.353 - 9472.931: 82.4747% ( 272) 00:10:21.733 9472.931 - 9532.509: 84.2131% ( 247) 00:10:21.733 9532.509 - 9592.087: 85.7615% ( 220) 00:10:21.733 9592.087 - 9651.665: 87.0918% ( 189) 00:10:21.733 9651.665 - 9711.244: 88.3376% ( 177) 00:10:21.733 9711.244 - 9770.822: 89.3159% ( 139) 00:10:21.733 9770.822 - 9830.400: 90.2097% ( 127) 00:10:21.733 9830.400 - 9889.978: 91.0121% ( 114) 00:10:21.733 9889.978 - 9949.556: 91.6596% ( 92) 00:10:21.733 9949.556 - 10009.135: 92.2157% ( 79) 00:10:21.733 10009.135 - 10068.713: 92.7506% ( 76) 00:10:21.733 10068.713 - 10128.291: 93.2644% ( 73) 00:10:21.733 10128.291 - 10187.869: 93.7993% ( 76) 00:10:21.733 10187.869 - 10247.447: 94.3060% ( 72) 00:10:21.733 10247.447 - 10307.025: 94.8128% ( 72) 00:10:21.733 10307.025 - 10366.604: 95.3125% ( 71) 00:10:21.733 10366.604 - 10426.182: 95.7348% ( 60) 00:10:21.733 10426.182 - 10485.760: 96.1078% ( 53) 00:10:21.733 10485.760 - 10545.338: 96.4527% ( 49) 00:10:21.733 10545.338 - 10604.916: 96.6779% ( 32) 
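Each controller/namespace pair gets its own histogram, and only non-empty buckets are printed, so the bucket lists differ from device to device. To reason about all six namespaces together, one can key buckets by their edges and sum the parenthesized I/O counts across devices; a sketch under that assumption (dev_b here is a made-up second device for illustration):

    from collections import defaultdict

    def merge_histograms(per_device_rows):
        """Sum per-bucket I/O counts across devices.
        per_device_rows: iterable of row lists, each row (low_us, high_us, count).
        Devices print only non-empty buckets, so absent buckets contribute 0."""
        merged = defaultdict(int)
        for rows in per_device_rows:
            for low, high, count in rows:
                merged[(low, high)] += count
        return dict(sorted(merged.items()))

    dev_a = [(6553.600, 6583.389, 1), (6583.389, 6613.178, 2)]  # from 0000:00:06.0 above
    dev_b = [(6583.389, 6613.178, 3)]                           # hypothetical device
    print(merge_histograms([dev_a, dev_b]))
    # {(6553.6, 6583.389): 1, (6583.389, 6613.178): 5}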
00:10:21.733 10604.916 - 10664.495: 96.8398% ( 23) 00:10:21.733 10664.495 - 10724.073: 96.9876% ( 21) 00:10:21.733 10724.073 - 10783.651: 97.1143% ( 18) 00:10:21.733 10783.651 - 10843.229: 97.2340% ( 17) 00:10:21.733 10843.229 - 10902.807: 97.3466% ( 16) 00:10:21.733 10902.807 - 10962.385: 97.4733% ( 18) 00:10:21.733 10962.385 - 11021.964: 97.5648% ( 13) 00:10:21.733 11021.964 - 11081.542: 97.6703% ( 15) 00:10:21.733 11081.542 - 11141.120: 97.7618% ( 13) 00:10:21.733 11141.120 - 11200.698: 97.8674% ( 15) 00:10:21.733 11200.698 - 11260.276: 97.9519% ( 12) 00:10:21.733 11260.276 - 11319.855: 98.0574% ( 15) 00:10:21.733 11319.855 - 11379.433: 98.1349% ( 11) 00:10:21.733 11379.433 - 11439.011: 98.2193% ( 12) 00:10:21.733 11439.011 - 11498.589: 98.2686% ( 7) 00:10:21.733 11498.589 - 11558.167: 98.3038% ( 5) 00:10:21.733 11558.167 - 11617.745: 98.3390% ( 5) 00:10:21.733 11617.745 - 11677.324: 98.3742% ( 5) 00:10:21.733 11677.324 - 11736.902: 98.4093% ( 5) 00:10:21.733 11736.902 - 11796.480: 98.4445% ( 5) 00:10:21.733 11796.480 - 11856.058: 98.4727% ( 4) 00:10:21.733 11856.058 - 11915.636: 98.5079% ( 5) 00:10:21.733 11915.636 - 11975.215: 98.5431% ( 5) 00:10:21.733 11975.215 - 12034.793: 98.5783% ( 5) 00:10:21.733 12034.793 - 12094.371: 98.6135% ( 5) 00:10:21.733 12094.371 - 12153.949: 98.6557% ( 6) 00:10:21.733 12153.949 - 12213.527: 98.6838% ( 4) 00:10:21.733 12213.527 - 12273.105: 98.7190% ( 5) 00:10:21.733 12273.105 - 12332.684: 98.7472% ( 4) 00:10:21.733 12332.684 - 12392.262: 98.7824% ( 5) 00:10:21.733 12392.262 - 12451.840: 98.8035% ( 3) 00:10:21.733 12451.840 - 12511.418: 98.8246% ( 3) 00:10:21.733 12511.418 - 12570.996: 98.8457% ( 3) 00:10:21.733 12570.996 - 12630.575: 98.8598% ( 2) 00:10:21.733 12630.575 - 12690.153: 98.8809% ( 3) 00:10:21.733 12690.153 - 12749.731: 98.8950% ( 2) 00:10:21.733 12749.731 - 12809.309: 98.9091% ( 2) 00:10:21.733 12809.309 - 12868.887: 98.9302% ( 3) 00:10:21.733 12868.887 - 12928.465: 98.9443% ( 2) 00:10:21.733 12928.465 - 12988.044: 98.9654% ( 3) 00:10:21.733 12988.044 - 13047.622: 98.9865% ( 3) 00:10:21.733 13047.622 - 13107.200: 99.0006% ( 2) 00:10:21.733 13107.200 - 13166.778: 99.0146% ( 2) 00:10:21.733 13166.778 - 13226.356: 99.0358% ( 3) 00:10:21.733 13226.356 - 13285.935: 99.0569% ( 3) 00:10:21.733 13285.935 - 13345.513: 99.0639% ( 1) 00:10:21.733 13345.513 - 13405.091: 99.0850% ( 3) 00:10:21.733 13405.091 - 13464.669: 99.0991% ( 2) 00:10:21.733 32410.531 - 32648.844: 99.1343% ( 5) 00:10:21.733 32648.844 - 32887.156: 99.1765% ( 6) 00:10:21.733 32887.156 - 33125.469: 99.2258% ( 7) 00:10:21.733 33125.469 - 33363.782: 99.2680% ( 6) 00:10:21.733 33363.782 - 33602.095: 99.3102% ( 6) 00:10:21.733 33602.095 - 33840.407: 99.3595% ( 7) 00:10:21.733 33840.407 - 34078.720: 99.4017% ( 6) 00:10:21.733 34078.720 - 34317.033: 99.4440% ( 6) 00:10:21.733 34317.033 - 34555.345: 99.4862% ( 6) 00:10:21.733 34555.345 - 34793.658: 99.5284% ( 6) 00:10:21.733 34793.658 - 35031.971: 99.5707% ( 6) 00:10:21.733 35031.971 - 35270.284: 99.6129% ( 6) 00:10:21.733 35270.284 - 35508.596: 99.6551% ( 6) 00:10:21.733 35508.596 - 35746.909: 99.7044% ( 7) 00:10:21.733 35746.909 - 35985.222: 99.7466% ( 6) 00:10:21.733 35985.222 - 36223.535: 99.7889% ( 6) 00:10:21.733 36223.535 - 36461.847: 99.8381% ( 7) 00:10:21.733 36461.847 - 36700.160: 99.8803% ( 6) 00:10:21.733 36700.160 - 36938.473: 99.9296% ( 7) 00:10:21.733 36938.473 - 37176.785: 99.9718% ( 6) 00:10:21.733 37176.785 - 37415.098: 100.0000% ( 4) 00:10:21.733 00:10:21.733 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 
00:10:21.733 ============================================================================== 00:10:21.733 Range in us Cumulative IO count 00:10:21.733 6762.124 - 6791.913: 0.0070% ( 1) 00:10:21.733 6791.913 - 6821.702: 0.0209% ( 2) 00:10:21.733 6821.702 - 6851.491: 0.0349% ( 2) 00:10:21.733 6851.491 - 6881.280: 0.0488% ( 2) 00:10:21.733 6881.280 - 6911.069: 0.0767% ( 4) 00:10:21.733 6911.069 - 6940.858: 0.0907% ( 2) 00:10:21.733 6940.858 - 6970.647: 0.1116% ( 3) 00:10:21.733 6970.647 - 7000.436: 0.1395% ( 4) 00:10:21.733 7000.436 - 7030.225: 0.1535% ( 2) 00:10:21.733 7030.225 - 7060.015: 0.1814% ( 4) 00:10:21.733 7060.015 - 7089.804: 0.2023% ( 3) 00:10:21.733 7089.804 - 7119.593: 0.2302% ( 4) 00:10:21.733 7119.593 - 7149.382: 0.2581% ( 4) 00:10:21.733 7149.382 - 7179.171: 0.2999% ( 6) 00:10:21.733 7179.171 - 7208.960: 0.3488% ( 7) 00:10:21.733 7208.960 - 7238.749: 0.3976% ( 7) 00:10:21.733 7238.749 - 7268.538: 0.4743% ( 11) 00:10:21.733 7268.538 - 7298.327: 0.5650% ( 13) 00:10:21.733 7298.327 - 7328.116: 0.6627% ( 14) 00:10:21.733 7328.116 - 7357.905: 0.8022% ( 20) 00:10:21.733 7357.905 - 7387.695: 0.9766% ( 25) 00:10:21.733 7387.695 - 7417.484: 1.1858% ( 30) 00:10:21.733 7417.484 - 7447.273: 1.4648% ( 40) 00:10:21.733 7447.273 - 7477.062: 1.8136% ( 50) 00:10:21.733 7477.062 - 7506.851: 2.2391% ( 61) 00:10:21.733 7506.851 - 7536.640: 2.6995% ( 66) 00:10:21.733 7536.640 - 7566.429: 3.2785% ( 83) 00:10:21.733 7566.429 - 7596.218: 3.9900% ( 102) 00:10:21.733 7596.218 - 7626.007: 4.7154% ( 104) 00:10:21.733 7626.007 - 7685.585: 6.3198% ( 230) 00:10:21.733 7685.585 - 7745.164: 8.2171% ( 272) 00:10:21.733 7745.164 - 7804.742: 10.2121% ( 286) 00:10:21.733 7804.742 - 7864.320: 12.3744% ( 310) 00:10:21.733 7864.320 - 7923.898: 14.4322% ( 295) 00:10:21.733 7923.898 - 7983.476: 16.6574% ( 319) 00:10:21.733 7983.476 - 8043.055: 18.8616% ( 316) 00:10:21.733 8043.055 - 8102.633: 21.1705% ( 331) 00:10:21.733 8102.633 - 8162.211: 23.5770% ( 345) 00:10:21.733 8162.211 - 8221.789: 26.0603% ( 356) 00:10:21.733 8221.789 - 8281.367: 28.6203% ( 367) 00:10:21.733 8281.367 - 8340.945: 31.3058% ( 385) 00:10:21.733 8340.945 - 8400.524: 33.9146% ( 374) 00:10:21.733 8400.524 - 8460.102: 36.6420% ( 391) 00:10:21.733 8460.102 - 8519.680: 39.3834% ( 393) 00:10:21.733 8519.680 - 8579.258: 42.1456% ( 396) 00:10:21.733 8579.258 - 8638.836: 44.9428% ( 401) 00:10:21.733 8638.836 - 8698.415: 47.7400% ( 401) 00:10:21.733 8698.415 - 8757.993: 50.5580% ( 404) 00:10:21.733 8757.993 - 8817.571: 53.4040% ( 408) 00:10:21.733 8817.571 - 8877.149: 56.2221% ( 404) 00:10:21.733 8877.149 - 8936.727: 59.1239% ( 416) 00:10:21.733 8936.727 - 8996.305: 61.9350% ( 403) 00:10:21.733 8996.305 - 9055.884: 64.7252% ( 400) 00:10:21.733 9055.884 - 9115.462: 67.5363% ( 403) 00:10:21.733 9115.462 - 9175.040: 70.2637% ( 391) 00:10:21.733 9175.040 - 9234.618: 72.9074% ( 379) 00:10:21.733 9234.618 - 9294.196: 75.4255% ( 361) 00:10:21.733 9294.196 - 9353.775: 77.8460% ( 347) 00:10:21.733 9353.775 - 9413.353: 80.0084% ( 310) 00:10:21.733 9413.353 - 9472.931: 81.9545% ( 279) 00:10:21.733 9472.931 - 9532.509: 83.7402% ( 256) 00:10:21.733 9532.509 - 9592.087: 85.3167% ( 226) 00:10:21.733 9592.087 - 9651.665: 86.6839% ( 196) 00:10:21.733 9651.665 - 9711.244: 87.8278% ( 164) 00:10:21.733 9711.244 - 9770.822: 88.8742% ( 150) 00:10:21.733 9770.822 - 9830.400: 89.7042% ( 119) 00:10:21.733 9830.400 - 9889.978: 90.5064% ( 115) 00:10:21.733 9889.978 - 9949.556: 91.1412% ( 91) 00:10:21.733 9949.556 - 10009.135: 91.7480% ( 87) 00:10:21.733 10009.135 - 10068.713: 
92.3410% ( 85) 00:10:21.733 10068.713 - 10128.291: 92.9339% ( 85) 00:10:21.733 10128.291 - 10187.869: 93.5059% ( 82) 00:10:21.733 10187.869 - 10247.447: 94.0569% ( 79) 00:10:21.733 10247.447 - 10307.025: 94.6010% ( 78) 00:10:21.733 10307.025 - 10366.604: 95.1311% ( 76) 00:10:21.733 10366.604 - 10426.182: 95.5427% ( 59) 00:10:21.733 10426.182 - 10485.760: 95.8845% ( 49) 00:10:21.733 10485.760 - 10545.338: 96.1775% ( 42) 00:10:21.733 10545.338 - 10604.916: 96.4495% ( 39) 00:10:21.733 10604.916 - 10664.495: 96.6588% ( 30) 00:10:21.733 10664.495 - 10724.073: 96.8052% ( 21) 00:10:21.733 10724.073 - 10783.651: 96.9378% ( 19) 00:10:21.733 10783.651 - 10843.229: 97.0843% ( 21) 00:10:21.733 10843.229 - 10902.807: 97.2098% ( 18) 00:10:21.733 10902.807 - 10962.385: 97.3214% ( 16) 00:10:21.733 10962.385 - 11021.964: 97.4121% ( 13) 00:10:21.733 11021.964 - 11081.542: 97.5098% ( 14) 00:10:21.733 11081.542 - 11141.120: 97.6144% ( 15) 00:10:21.733 11141.120 - 11200.698: 97.6981% ( 12) 00:10:21.733 11200.698 - 11260.276: 97.7888% ( 13) 00:10:21.733 11260.276 - 11319.855: 97.8585% ( 10) 00:10:21.733 11319.855 - 11379.433: 97.9353% ( 11) 00:10:21.733 11379.433 - 11439.011: 97.9980% ( 9) 00:10:21.733 11439.011 - 11498.589: 98.0678% ( 10) 00:10:21.733 11498.589 - 11558.167: 98.1166% ( 7) 00:10:21.734 11558.167 - 11617.745: 98.1724% ( 8) 00:10:21.734 11617.745 - 11677.324: 98.2213% ( 7) 00:10:21.734 11677.324 - 11736.902: 98.2701% ( 7) 00:10:21.734 11736.902 - 11796.480: 98.3189% ( 7) 00:10:21.734 11796.480 - 11856.058: 98.3677% ( 7) 00:10:21.734 11856.058 - 11915.636: 98.4235% ( 8) 00:10:21.734 11915.636 - 11975.215: 98.4794% ( 8) 00:10:21.734 11975.215 - 12034.793: 98.5352% ( 8) 00:10:21.734 12034.793 - 12094.371: 98.5770% ( 6) 00:10:21.734 12094.371 - 12153.949: 98.6328% ( 8) 00:10:21.734 12153.949 - 12213.527: 98.6816% ( 7) 00:10:21.734 12213.527 - 12273.105: 98.7374% ( 8) 00:10:21.734 12273.105 - 12332.684: 98.7793% ( 6) 00:10:21.734 12332.684 - 12392.262: 98.8211% ( 6) 00:10:21.734 12392.262 - 12451.840: 98.8560% ( 5) 00:10:21.734 12451.840 - 12511.418: 98.8909% ( 5) 00:10:21.734 12511.418 - 12570.996: 98.9258% ( 5) 00:10:21.734 12570.996 - 12630.575: 98.9607% ( 5) 00:10:21.734 12630.575 - 12690.153: 98.9886% ( 4) 00:10:21.734 12690.153 - 12749.731: 99.0165% ( 4) 00:10:21.734 12749.731 - 12809.309: 99.0304% ( 2) 00:10:21.734 12809.309 - 12868.887: 99.0444% ( 2) 00:10:21.734 12868.887 - 12928.465: 99.0653% ( 3) 00:10:21.734 12928.465 - 12988.044: 99.0792% ( 2) 00:10:21.734 12988.044 - 13047.622: 99.0932% ( 2) 00:10:21.734 13047.622 - 13107.200: 99.1071% ( 2) 00:10:21.734 20494.895 - 20614.051: 99.1141% ( 1) 00:10:21.734 20614.051 - 20733.207: 99.1350% ( 3) 00:10:21.734 20733.207 - 20852.364: 99.1560% ( 3) 00:10:21.734 20852.364 - 20971.520: 99.1839% ( 4) 00:10:21.734 20971.520 - 21090.676: 99.2048% ( 3) 00:10:21.734 21090.676 - 21209.833: 99.2327% ( 4) 00:10:21.734 21209.833 - 21328.989: 99.2536% ( 3) 00:10:21.734 21328.989 - 21448.145: 99.2815% ( 4) 00:10:21.734 21448.145 - 21567.302: 99.3025% ( 3) 00:10:21.734 21567.302 - 21686.458: 99.3304% ( 4) 00:10:21.734 21686.458 - 21805.615: 99.3513% ( 3) 00:10:21.734 21805.615 - 21924.771: 99.3722% ( 3) 00:10:21.734 21924.771 - 22043.927: 99.3931% ( 3) 00:10:21.734 22043.927 - 22163.084: 99.4141% ( 3) 00:10:21.734 22163.084 - 22282.240: 99.4420% ( 4) 00:10:21.734 22282.240 - 22401.396: 99.4629% ( 3) 00:10:21.734 22401.396 - 22520.553: 99.4768% ( 2) 00:10:21.734 22520.553 - 22639.709: 99.5047% ( 4) 00:10:21.734 22639.709 - 22758.865: 99.5257% ( 3) 00:10:21.734 
22758.865 - 22878.022: 99.5466% ( 3) 00:10:21.734 22878.022 - 22997.178: 99.5745% ( 4) 00:10:21.734 22997.178 - 23116.335: 99.5954% ( 3) 00:10:21.734 23116.335 - 23235.491: 99.6164% ( 3) 00:10:21.734 23235.491 - 23354.647: 99.6373% ( 3) 00:10:21.734 23354.647 - 23473.804: 99.6652% ( 4) 00:10:21.734 23473.804 - 23592.960: 99.6861% ( 3) 00:10:21.734 23592.960 - 23712.116: 99.7070% ( 3) 00:10:21.734 23712.116 - 23831.273: 99.7349% ( 4) 00:10:21.734 23831.273 - 23950.429: 99.7559% ( 3) 00:10:21.734 23950.429 - 24069.585: 99.7838% ( 4) 00:10:21.734 24069.585 - 24188.742: 99.8047% ( 3) 00:10:21.734 24188.742 - 24307.898: 99.8326% ( 4) 00:10:21.734 24307.898 - 24427.055: 99.8535% ( 3) 00:10:21.734 24427.055 - 24546.211: 99.8744% ( 3) 00:10:21.734 24546.211 - 24665.367: 99.9023% ( 4) 00:10:21.734 24665.367 - 24784.524: 99.9233% ( 3) 00:10:21.734 24784.524 - 24903.680: 99.9512% ( 4) 00:10:21.734 24903.680 - 25022.836: 99.9721% ( 3) 00:10:21.734 25022.836 - 25141.993: 100.0000% ( 4) 00:10:21.734 00:10:21.734 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:21.734 ============================================================================== 00:10:21.734 Range in us Cumulative IO count 00:10:21.734 6791.913 - 6821.702: 0.0070% ( 1) 00:10:21.734 6821.702 - 6851.491: 0.0140% ( 1) 00:10:21.734 6851.491 - 6881.280: 0.0488% ( 5) 00:10:21.734 6881.280 - 6911.069: 0.0767% ( 4) 00:10:21.734 6911.069 - 6940.858: 0.0907% ( 2) 00:10:21.734 6940.858 - 6970.647: 0.1186% ( 4) 00:10:21.734 6970.647 - 7000.436: 0.1465% ( 4) 00:10:21.734 7000.436 - 7030.225: 0.1744% ( 4) 00:10:21.734 7030.225 - 7060.015: 0.2023% ( 4) 00:10:21.734 7060.015 - 7089.804: 0.2162% ( 2) 00:10:21.734 7089.804 - 7119.593: 0.2441% ( 4) 00:10:21.734 7119.593 - 7149.382: 0.2651% ( 3) 00:10:21.734 7149.382 - 7179.171: 0.3139% ( 7) 00:10:21.734 7179.171 - 7208.960: 0.3906% ( 11) 00:10:21.734 7208.960 - 7238.749: 0.4395% ( 7) 00:10:21.734 7238.749 - 7268.538: 0.5092% ( 10) 00:10:21.734 7268.538 - 7298.327: 0.6069% ( 14) 00:10:21.734 7298.327 - 7328.116: 0.6975% ( 13) 00:10:21.734 7328.116 - 7357.905: 0.8510% ( 22) 00:10:21.734 7357.905 - 7387.695: 1.0254% ( 25) 00:10:21.734 7387.695 - 7417.484: 1.3253% ( 43) 00:10:21.734 7417.484 - 7447.273: 1.6183% ( 42) 00:10:21.734 7447.273 - 7477.062: 2.0089% ( 56) 00:10:21.734 7477.062 - 7506.851: 2.4484% ( 63) 00:10:21.734 7506.851 - 7536.640: 2.9646% ( 74) 00:10:21.734 7536.640 - 7566.429: 3.5575% ( 85) 00:10:21.734 7566.429 - 7596.218: 4.1992% ( 92) 00:10:21.734 7596.218 - 7626.007: 4.8689% ( 96) 00:10:21.734 7626.007 - 7685.585: 6.4802% ( 231) 00:10:21.734 7685.585 - 7745.164: 8.1473% ( 239) 00:10:21.734 7745.164 - 7804.742: 10.0935% ( 279) 00:10:21.734 7804.742 - 7864.320: 12.1094% ( 289) 00:10:21.734 7864.320 - 7923.898: 14.2299% ( 304) 00:10:21.734 7923.898 - 7983.476: 16.4481% ( 318) 00:10:21.734 7983.476 - 8043.055: 18.8337% ( 342) 00:10:21.734 8043.055 - 8102.633: 21.0728% ( 321) 00:10:21.734 8102.633 - 8162.211: 23.4235% ( 337) 00:10:21.734 8162.211 - 8221.789: 25.8719% ( 351) 00:10:21.734 8221.789 - 8281.367: 28.4459% ( 369) 00:10:21.734 8281.367 - 8340.945: 31.0965% ( 380) 00:10:21.734 8340.945 - 8400.524: 33.8518% ( 395) 00:10:21.734 8400.524 - 8460.102: 36.5583% ( 388) 00:10:21.734 8460.102 - 8519.680: 39.2997% ( 393) 00:10:21.734 8519.680 - 8579.258: 42.1387% ( 407) 00:10:21.734 8579.258 - 8638.836: 44.9358% ( 401) 00:10:21.734 8638.836 - 8698.415: 47.8097% ( 412) 00:10:21.734 8698.415 - 8757.993: 50.6696% ( 410) 00:10:21.734 8757.993 - 8817.571: 53.5435% ( 412) 00:10:21.734 
8817.571 - 8877.149: 56.3686% ( 405) 00:10:21.734 8877.149 - 8936.727: 59.2494% ( 413) 00:10:21.734 8936.727 - 8996.305: 62.1861% ( 421) 00:10:21.734 8996.305 - 9055.884: 64.9484% ( 396) 00:10:21.734 9055.884 - 9115.462: 67.7455% ( 401) 00:10:21.734 9115.462 - 9175.040: 70.5706% ( 405) 00:10:21.734 9175.040 - 9234.618: 73.0539% ( 356) 00:10:21.734 9234.618 - 9294.196: 75.5580% ( 359) 00:10:21.734 9294.196 - 9353.775: 77.9018% ( 336) 00:10:21.734 9353.775 - 9413.353: 80.0572% ( 309) 00:10:21.734 9413.353 - 9472.931: 82.1359% ( 298) 00:10:21.734 9472.931 - 9532.509: 83.9355% ( 258) 00:10:21.734 9532.509 - 9592.087: 85.5887% ( 237) 00:10:21.734 9592.087 - 9651.665: 86.9629% ( 197) 00:10:21.734 9651.665 - 9711.244: 88.1487% ( 170) 00:10:21.734 9711.244 - 9770.822: 89.1183% ( 139) 00:10:21.734 9770.822 - 9830.400: 89.9414% ( 118) 00:10:21.734 9830.400 - 9889.978: 90.6878% ( 107) 00:10:21.734 9889.978 - 9949.556: 91.3086% ( 89) 00:10:21.734 9949.556 - 10009.135: 91.9224% ( 88) 00:10:21.734 10009.135 - 10068.713: 92.4665% ( 78) 00:10:21.734 10068.713 - 10128.291: 92.9967% ( 76) 00:10:21.734 10128.291 - 10187.869: 93.5268% ( 76) 00:10:21.734 10187.869 - 10247.447: 94.0499% ( 75) 00:10:21.734 10247.447 - 10307.025: 94.5522% ( 72) 00:10:21.734 10307.025 - 10366.604: 95.0265% ( 68) 00:10:21.734 10366.604 - 10426.182: 95.4171% ( 56) 00:10:21.734 10426.182 - 10485.760: 95.7171% ( 43) 00:10:21.734 10485.760 - 10545.338: 96.0170% ( 43) 00:10:21.734 10545.338 - 10604.916: 96.2123% ( 28) 00:10:21.734 10604.916 - 10664.495: 96.3728% ( 23) 00:10:21.734 10664.495 - 10724.073: 96.5332% ( 23) 00:10:21.734 10724.073 - 10783.651: 96.6588% ( 18) 00:10:21.734 10783.651 - 10843.229: 96.7634% ( 15) 00:10:21.734 10843.229 - 10902.807: 96.8680% ( 15) 00:10:21.734 10902.807 - 10962.385: 96.9796% ( 16) 00:10:21.734 10962.385 - 11021.964: 97.0912% ( 16) 00:10:21.734 11021.964 - 11081.542: 97.1889% ( 14) 00:10:21.734 11081.542 - 11141.120: 97.2866% ( 14) 00:10:21.734 11141.120 - 11200.698: 97.3772% ( 13) 00:10:21.734 11200.698 - 11260.276: 97.4400% ( 9) 00:10:21.734 11260.276 - 11319.855: 97.5167% ( 11) 00:10:21.734 11319.855 - 11379.433: 97.5795% ( 9) 00:10:21.734 11379.433 - 11439.011: 97.6632% ( 12) 00:10:21.735 11439.011 - 11498.589: 97.7330% ( 10) 00:10:21.735 11498.589 - 11558.167: 97.8027% ( 10) 00:10:21.735 11558.167 - 11617.745: 97.8725% ( 10) 00:10:21.735 11617.745 - 11677.324: 97.9422% ( 10) 00:10:21.735 11677.324 - 11736.902: 97.9980% ( 8) 00:10:21.735 11736.902 - 11796.480: 98.0748% ( 11) 00:10:21.735 11796.480 - 11856.058: 98.1166% ( 6) 00:10:21.735 11856.058 - 11915.636: 98.1515% ( 5) 00:10:21.735 11915.636 - 11975.215: 98.1934% ( 6) 00:10:21.735 11975.215 - 12034.793: 98.2282% ( 5) 00:10:21.735 12034.793 - 12094.371: 98.2701% ( 6) 00:10:21.735 12094.371 - 12153.949: 98.3050% ( 5) 00:10:21.735 12153.949 - 12213.527: 98.3398% ( 5) 00:10:21.735 12213.527 - 12273.105: 98.3747% ( 5) 00:10:21.735 12273.105 - 12332.684: 98.4166% ( 6) 00:10:21.735 12332.684 - 12392.262: 98.4515% ( 5) 00:10:21.735 12392.262 - 12451.840: 98.4863% ( 5) 00:10:21.735 12451.840 - 12511.418: 98.5212% ( 5) 00:10:21.735 12511.418 - 12570.996: 98.5631% ( 6) 00:10:21.735 12570.996 - 12630.575: 98.5979% ( 5) 00:10:21.735 12630.575 - 12690.153: 98.6258% ( 4) 00:10:21.735 12690.153 - 12749.731: 98.6677% ( 6) 00:10:21.735 12749.731 - 12809.309: 98.7026% ( 5) 00:10:21.735 12809.309 - 12868.887: 98.7374% ( 5) 00:10:21.735 12868.887 - 12928.465: 98.7723% ( 5) 00:10:21.735 12928.465 - 12988.044: 98.8142% ( 6) 00:10:21.735 12988.044 - 13047.622: 
98.8421% ( 4) 00:10:21.735 13047.622 - 13107.200: 98.8770% ( 5) 00:10:21.735 13107.200 - 13166.778: 98.9118% ( 5) 00:10:21.735 13166.778 - 13226.356: 98.9537% ( 6) 00:10:21.735 13226.356 - 13285.935: 98.9816% ( 4) 00:10:21.735 13285.935 - 13345.513: 98.9955% ( 2) 00:10:21.735 13345.513 - 13405.091: 99.0165% ( 3) 00:10:21.735 13405.091 - 13464.669: 99.0304% ( 2) 00:10:21.735 13464.669 - 13524.247: 99.0513% ( 3) 00:10:21.735 13524.247 - 13583.825: 99.0653% ( 2) 00:10:21.735 13583.825 - 13643.404: 99.0862% ( 3) 00:10:21.735 13643.404 - 13702.982: 99.0932% ( 1) 00:10:21.735 13702.982 - 13762.560: 99.1071% ( 2) 00:10:21.735 18350.080 - 18469.236: 99.1211% ( 2) 00:10:21.735 18469.236 - 18588.393: 99.1420% ( 3) 00:10:21.735 18588.393 - 18707.549: 99.1629% ( 3) 00:10:21.735 18707.549 - 18826.705: 99.1839% ( 3) 00:10:21.735 18826.705 - 18945.862: 99.2048% ( 3) 00:10:21.735 18945.862 - 19065.018: 99.2327% ( 4) 00:10:21.735 19065.018 - 19184.175: 99.2536% ( 3) 00:10:21.735 19184.175 - 19303.331: 99.2746% ( 3) 00:10:21.735 19303.331 - 19422.487: 99.3025% ( 4) 00:10:21.735 19422.487 - 19541.644: 99.3164% ( 2) 00:10:21.735 19541.644 - 19660.800: 99.3443% ( 4) 00:10:21.735 19660.800 - 19779.956: 99.3583% ( 2) 00:10:21.735 19779.956 - 19899.113: 99.3862% ( 4) 00:10:21.735 19899.113 - 20018.269: 99.4071% ( 3) 00:10:21.735 20018.269 - 20137.425: 99.4280% ( 3) 00:10:21.735 20137.425 - 20256.582: 99.4489% ( 3) 00:10:21.735 20256.582 - 20375.738: 99.4699% ( 3) 00:10:21.735 20375.738 - 20494.895: 99.4978% ( 4) 00:10:21.735 20494.895 - 20614.051: 99.5187% ( 3) 00:10:21.735 20614.051 - 20733.207: 99.5396% ( 3) 00:10:21.735 20733.207 - 20852.364: 99.5675% ( 4) 00:10:21.735 20852.364 - 20971.520: 99.5884% ( 3) 00:10:21.735 20971.520 - 21090.676: 99.6164% ( 4) 00:10:21.735 21090.676 - 21209.833: 99.6373% ( 3) 00:10:21.735 21209.833 - 21328.989: 99.6652% ( 4) 00:10:21.735 21328.989 - 21448.145: 99.6861% ( 3) 00:10:21.735 21448.145 - 21567.302: 99.7070% ( 3) 00:10:21.735 21567.302 - 21686.458: 99.7280% ( 3) 00:10:21.735 21686.458 - 21805.615: 99.7489% ( 3) 00:10:21.735 21805.615 - 21924.771: 99.7768% ( 4) 00:10:21.735 21924.771 - 22043.927: 99.7977% ( 3) 00:10:21.735 22043.927 - 22163.084: 99.8186% ( 3) 00:10:21.735 22163.084 - 22282.240: 99.8396% ( 3) 00:10:21.735 22282.240 - 22401.396: 99.8675% ( 4) 00:10:21.735 22401.396 - 22520.553: 99.8884% ( 3) 00:10:21.735 22520.553 - 22639.709: 99.9093% ( 3) 00:10:21.735 22639.709 - 22758.865: 99.9372% ( 4) 00:10:21.735 22758.865 - 22878.022: 99.9581% ( 3) 00:10:21.735 22878.022 - 22997.178: 99.9791% ( 3) 00:10:21.735 22997.178 - 23116.335: 100.0000% ( 3) 00:10:21.735 00:10:21.735 21:01:35 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:23.111 Initializing NVMe Controllers 00:10:23.111 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:23.111 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:23.111 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:23.111 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:23.111 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:23.111 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:23.111 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:23.111 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:23.111 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:23.111 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:23.111 Initialization complete. Launching workers. 
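The write pass whose results follow reuses the read pass's geometry and only flips the workload: 128 outstanding I/Os of 12288 bytes for 1 second per controller. To repeat the pair outside the harness, the two invocations can be replayed as-is; a sketch with the binary path and flags copied from this log (the extra -N on the read pass is carried over verbatim, and the run needs the same root, hugepage, and device-binding setup the CI VM prepares):

    import subprocess

    PERF = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf"
    COMMON = ["-q", "128", "-o", "12288", "-t", "1", "-LL", "-i", "0"]

    # Read pass (the run reported above), then the write pass reported below.
    subprocess.run([PERF, "-w", "read", *COMMON, "-N"], check=True)
    subprocess.run([PERF, "-w", "write", *COMMON], check=True)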
00:10:23.111 ======================================================== 00:10:23.111 Latency(us) 00:10:23.111 Device Information : IOPS MiB/s Average min max 00:10:23.111 PCIE (0000:00:06.0) NSID 1 from core 0: 10953.68 128.36 11678.03 8985.50 33646.15 00:10:23.111 PCIE (0000:00:07.0) NSID 1 from core 0: 10953.68 128.36 11664.04 9273.67 31923.80 00:10:23.111 PCIE (0000:00:09.0) NSID 1 from core 0: 10953.68 128.36 11648.36 9164.30 31167.90 00:10:23.111 PCIE (0000:00:08.0) NSID 1 from core 0: 10953.68 128.36 11632.58 9386.63 29446.83 00:10:23.111 PCIE (0000:00:08.0) NSID 2 from core 0: 10953.68 128.36 11616.31 9424.77 27907.68 00:10:23.111 PCIE (0000:00:08.0) NSID 3 from core 0: 10953.68 128.36 11600.28 9302.93 26125.98 00:10:23.111 ======================================================== 00:10:23.111 Total : 65722.08 770.18 11639.93 8985.50 33646.15 00:10:23.111 00:10:23.111 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:23.111 ================================================================================= 00:10:23.111 1.00000% : 9532.509us 00:10:23.111 10.00000% : 10187.869us 00:10:23.111 25.00000% : 10724.073us 00:10:23.111 50.00000% : 11498.589us 00:10:23.111 75.00000% : 12273.105us 00:10:23.111 90.00000% : 12809.309us 00:10:23.111 95.00000% : 13166.778us 00:10:23.111 98.00000% : 13702.982us 00:10:23.111 99.00000% : 28478.371us 00:10:23.111 99.50000% : 31218.967us 00:10:23.111 99.90000% : 33363.782us 00:10:23.111 99.99000% : 33602.095us 00:10:23.111 99.99900% : 33840.407us 00:10:23.111 99.99990% : 33840.407us 00:10:23.111 99.99999% : 33840.407us 00:10:23.111 00:10:23.111 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:23.111 ================================================================================= 00:10:23.111 1.00000% : 9651.665us 00:10:23.111 10.00000% : 10366.604us 00:10:23.111 25.00000% : 10843.229us 00:10:23.111 50.00000% : 11498.589us 00:10:23.111 75.00000% : 12153.949us 00:10:23.111 90.00000% : 12690.153us 00:10:23.111 95.00000% : 13047.622us 00:10:23.111 98.00000% : 13643.404us 00:10:23.111 99.00000% : 27763.433us 00:10:23.111 99.50000% : 30027.404us 00:10:23.111 99.90000% : 31695.593us 00:10:23.111 99.99000% : 31933.905us 00:10:23.111 99.99900% : 31933.905us 00:10:23.111 99.99990% : 31933.905us 00:10:23.111 99.99999% : 31933.905us 00:10:23.111 00:10:23.111 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:23.111 ================================================================================= 00:10:23.111 1.00000% : 9651.665us 00:10:23.111 10.00000% : 10307.025us 00:10:23.111 25.00000% : 10843.229us 00:10:23.111 50.00000% : 11498.589us 00:10:23.111 75.00000% : 12153.949us 00:10:23.111 90.00000% : 12690.153us 00:10:23.111 95.00000% : 12988.044us 00:10:23.111 98.00000% : 13524.247us 00:10:23.111 99.00000% : 27167.651us 00:10:23.111 99.50000% : 29193.309us 00:10:23.111 99.90000% : 30980.655us 00:10:23.111 99.99000% : 31218.967us 00:10:23.111 99.99900% : 31218.967us 00:10:23.111 99.99990% : 31218.967us 00:10:23.111 99.99999% : 31218.967us 00:10:23.111 00:10:23.111 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:23.111 ================================================================================= 00:10:23.111 1.00000% : 9711.244us 00:10:23.111 10.00000% : 10366.604us 00:10:23.111 25.00000% : 10843.229us 00:10:23.111 50.00000% : 11498.589us 00:10:23.111 75.00000% : 12153.949us 00:10:23.111 90.00000% : 12630.575us 00:10:23.111 95.00000% : 12988.044us 00:10:23.111 98.00000% : 
13524.247us 00:10:23.111 99.00000% : 25499.462us 00:10:23.111 99.50000% : 27525.120us 00:10:23.111 99.90000% : 29074.153us 00:10:23.111 99.99000% : 29431.622us 00:10:23.111 99.99900% : 29550.778us 00:10:23.111 99.99990% : 29550.778us 00:10:23.111 99.99999% : 29550.778us 00:10:23.111 00:10:23.111 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:23.111 ================================================================================= 00:10:23.111 1.00000% : 9770.822us 00:10:23.111 10.00000% : 10366.604us 00:10:23.111 25.00000% : 10843.229us 00:10:23.111 50.00000% : 11498.589us 00:10:23.111 75.00000% : 12153.949us 00:10:23.111 90.00000% : 12630.575us 00:10:23.111 95.00000% : 12988.044us 00:10:23.111 98.00000% : 13583.825us 00:10:23.111 99.00000% : 24188.742us 00:10:23.111 99.50000% : 25856.931us 00:10:23.111 99.90000% : 27644.276us 00:10:23.111 99.99000% : 27882.589us 00:10:23.111 99.99900% : 28001.745us 00:10:23.111 99.99990% : 28001.745us 00:10:23.111 99.99999% : 28001.745us 00:10:23.111 00:10:23.111 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:23.111 ================================================================================= 00:10:23.111 1.00000% : 9770.822us 00:10:23.111 10.00000% : 10366.604us 00:10:23.111 25.00000% : 10843.229us 00:10:23.111 50.00000% : 11498.589us 00:10:23.111 75.00000% : 12153.949us 00:10:23.111 90.00000% : 12630.575us 00:10:23.111 95.00000% : 12988.044us 00:10:23.111 98.00000% : 13464.669us 00:10:23.111 99.00000% : 22163.084us 00:10:23.111 99.50000% : 24188.742us 00:10:23.111 99.90000% : 25856.931us 00:10:23.111 99.99000% : 26095.244us 00:10:23.112 99.99900% : 26214.400us 00:10:23.112 99.99990% : 26214.400us 00:10:23.112 99.99999% : 26214.400us 00:10:23.112 00:10:23.112 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:23.112 ============================================================================== 00:10:23.112 Range in us Cumulative IO count 00:10:23.112 8936.727 - 8996.305: 0.0091% ( 1) 00:10:23.112 8996.305 - 9055.884: 0.0363% ( 3) 00:10:23.112 9055.884 - 9115.462: 0.0999% ( 7) 00:10:23.112 9115.462 - 9175.040: 0.1999% ( 11) 00:10:23.112 9175.040 - 9234.618: 0.3089% ( 12) 00:10:23.112 9234.618 - 9294.196: 0.4179% ( 12) 00:10:23.112 9294.196 - 9353.775: 0.5269% ( 12) 00:10:23.112 9353.775 - 9413.353: 0.6722% ( 16) 00:10:23.112 9413.353 - 9472.931: 0.8721% ( 22) 00:10:23.112 9472.931 - 9532.509: 1.1719% ( 33) 00:10:23.112 9532.509 - 9592.087: 1.4989% ( 36) 00:10:23.112 9592.087 - 9651.665: 1.9077% ( 45) 00:10:23.112 9651.665 - 9711.244: 2.3165% ( 45) 00:10:23.112 9711.244 - 9770.822: 2.9706% ( 72) 00:10:23.112 9770.822 - 9830.400: 3.6519% ( 75) 00:10:23.112 9830.400 - 9889.978: 4.4695% ( 90) 00:10:23.112 9889.978 - 9949.556: 5.3234% ( 94) 00:10:23.112 9949.556 - 10009.135: 6.4771% ( 127) 00:10:23.112 10009.135 - 10068.713: 7.8034% ( 146) 00:10:23.112 10068.713 - 10128.291: 9.1842% ( 152) 00:10:23.112 10128.291 - 10187.869: 10.4015% ( 134) 00:10:23.112 10187.869 - 10247.447: 11.9095% ( 166) 00:10:23.112 10247.447 - 10307.025: 13.3903% ( 163) 00:10:23.112 10307.025 - 10366.604: 15.0799% ( 186) 00:10:23.112 10366.604 - 10426.182: 16.7969% ( 189) 00:10:23.112 10426.182 - 10485.760: 18.3866% ( 175) 00:10:23.112 10485.760 - 10545.338: 20.0945% ( 188) 00:10:23.112 10545.338 - 10604.916: 22.0113% ( 211) 00:10:23.112 10604.916 - 10664.495: 23.7736% ( 194) 00:10:23.112 10664.495 - 10724.073: 25.6359% ( 205) 00:10:23.112 10724.073 - 10783.651: 27.4437% ( 199) 00:10:23.112 10783.651 - 10843.229: 
29.4422% ( 220) 00:10:23.112 10843.229 - 10902.807: 31.3863% ( 214) 00:10:23.112 10902.807 - 10962.385: 33.2667% ( 207) 00:10:23.112 10962.385 - 11021.964: 35.2562% ( 219) 00:10:23.112 11021.964 - 11081.542: 37.2820% ( 223) 00:10:23.112 11081.542 - 11141.120: 39.0716% ( 197) 00:10:23.112 11141.120 - 11200.698: 41.0338% ( 216) 00:10:23.112 11200.698 - 11260.276: 43.0778% ( 225) 00:10:23.112 11260.276 - 11319.855: 45.0127% ( 213) 00:10:23.112 11319.855 - 11379.433: 47.1384% ( 234) 00:10:23.112 11379.433 - 11439.011: 49.0643% ( 212) 00:10:23.112 11439.011 - 11498.589: 51.0629% ( 220) 00:10:23.112 11498.589 - 11558.167: 53.0160% ( 215) 00:10:23.112 11558.167 - 11617.745: 55.0963% ( 229) 00:10:23.112 11617.745 - 11677.324: 57.1312% ( 224) 00:10:23.112 11677.324 - 11736.902: 59.0752% ( 214) 00:10:23.112 11736.902 - 11796.480: 61.1192% ( 225) 00:10:23.112 11796.480 - 11856.058: 62.9996% ( 207) 00:10:23.112 11856.058 - 11915.636: 64.9800% ( 218) 00:10:23.112 11915.636 - 11975.215: 67.0967% ( 233) 00:10:23.112 11975.215 - 12034.793: 68.9771% ( 207) 00:10:23.112 12034.793 - 12094.371: 70.9211% ( 214) 00:10:23.112 12094.371 - 12153.949: 72.8924% ( 217) 00:10:23.112 12153.949 - 12213.527: 74.7911% ( 209) 00:10:23.112 12213.527 - 12273.105: 76.6170% ( 201) 00:10:23.112 12273.105 - 12332.684: 78.4430% ( 201) 00:10:23.112 12332.684 - 12392.262: 80.3234% ( 207) 00:10:23.112 12392.262 - 12451.840: 82.0676% ( 192) 00:10:23.112 12451.840 - 12511.418: 83.7482% ( 185) 00:10:23.112 12511.418 - 12570.996: 85.3198% ( 173) 00:10:23.112 12570.996 - 12630.575: 86.8550% ( 169) 00:10:23.112 12630.575 - 12690.153: 88.0632% ( 133) 00:10:23.112 12690.153 - 12749.731: 89.3623% ( 143) 00:10:23.112 12749.731 - 12809.309: 90.4706% ( 122) 00:10:23.112 12809.309 - 12868.887: 91.5698% ( 121) 00:10:23.112 12868.887 - 12928.465: 92.4237% ( 94) 00:10:23.112 12928.465 - 12988.044: 93.3230% ( 99) 00:10:23.112 12988.044 - 13047.622: 94.1497% ( 91) 00:10:23.112 13047.622 - 13107.200: 94.8129% ( 73) 00:10:23.112 13107.200 - 13166.778: 95.3670% ( 61) 00:10:23.112 13166.778 - 13226.356: 95.8939% ( 58) 00:10:23.112 13226.356 - 13285.935: 96.3118% ( 46) 00:10:23.112 13285.935 - 13345.513: 96.6751% ( 40) 00:10:23.112 13345.513 - 13405.091: 97.0567% ( 42) 00:10:23.112 13405.091 - 13464.669: 97.3110% ( 28) 00:10:23.112 13464.669 - 13524.247: 97.5654% ( 28) 00:10:23.112 13524.247 - 13583.825: 97.8289% ( 29) 00:10:23.112 13583.825 - 13643.404: 97.9742% ( 16) 00:10:23.112 13643.404 - 13702.982: 98.1105% ( 15) 00:10:23.112 13702.982 - 13762.560: 98.2376% ( 14) 00:10:23.112 13762.560 - 13822.138: 98.3467% ( 12) 00:10:23.112 13822.138 - 13881.716: 98.4284% ( 9) 00:10:23.112 13881.716 - 13941.295: 98.4829% ( 6) 00:10:23.112 13941.295 - 14000.873: 98.5647% ( 9) 00:10:23.112 14000.873 - 14060.451: 98.6101% ( 5) 00:10:23.112 14060.451 - 14120.029: 98.6737% ( 7) 00:10:23.112 14120.029 - 14179.607: 98.7282% ( 6) 00:10:23.112 14179.607 - 14239.185: 98.7645% ( 4) 00:10:23.112 14239.185 - 14298.764: 98.7918% ( 3) 00:10:23.112 14298.764 - 14358.342: 98.8281% ( 4) 00:10:23.112 14358.342 - 14417.920: 98.8372% ( 1) 00:10:23.112 27763.433 - 27882.589: 98.8463% ( 1) 00:10:23.112 27882.589 - 28001.745: 98.8826% ( 4) 00:10:23.112 28001.745 - 28120.902: 98.8917% ( 1) 00:10:23.112 28120.902 - 28240.058: 98.9281% ( 4) 00:10:23.112 28240.058 - 28359.215: 98.9735% ( 5) 00:10:23.112 28359.215 - 28478.371: 99.0280% ( 6) 00:10:23.112 28478.371 - 28597.527: 99.0552% ( 3) 00:10:23.112 28597.527 - 28716.684: 99.0734% ( 2) 00:10:23.112 28716.684 - 28835.840: 99.1007% ( 3) 
00:10:23.112 28835.840 - 28954.996: 99.1188% ( 2) 00:10:23.112 29074.153 - 29193.309: 99.1370% ( 2) 00:10:23.112 29193.309 - 29312.465: 99.1642% ( 3) 00:10:23.112 29312.465 - 29431.622: 99.1824% ( 2) 00:10:23.112 29431.622 - 29550.778: 99.1915% ( 1) 00:10:23.112 29550.778 - 29669.935: 99.2188% ( 3) 00:10:23.112 29669.935 - 29789.091: 99.2460% ( 3) 00:10:23.112 29789.091 - 29908.247: 99.2642% ( 2) 00:10:23.112 29908.247 - 30027.404: 99.2823% ( 2) 00:10:23.112 30027.404 - 30146.560: 99.3005% ( 2) 00:10:23.112 30146.560 - 30265.716: 99.3187% ( 2) 00:10:23.112 30265.716 - 30384.873: 99.3459% ( 3) 00:10:23.112 30384.873 - 30504.029: 99.3641% ( 2) 00:10:23.112 30504.029 - 30742.342: 99.4095% ( 5) 00:10:23.112 30742.342 - 30980.655: 99.4640% ( 6) 00:10:23.112 30980.655 - 31218.967: 99.5094% ( 5) 00:10:23.112 31218.967 - 31457.280: 99.5640% ( 6) 00:10:23.112 31457.280 - 31695.593: 99.6094% ( 5) 00:10:23.112 31695.593 - 31933.905: 99.6548% ( 5) 00:10:23.112 31933.905 - 32172.218: 99.7093% ( 6) 00:10:23.112 32172.218 - 32410.531: 99.7638% ( 6) 00:10:23.112 32410.531 - 32648.844: 99.8001% ( 4) 00:10:23.112 32648.844 - 32887.156: 99.8547% ( 6) 00:10:23.112 32887.156 - 33125.469: 99.8910% ( 4) 00:10:23.112 33125.469 - 33363.782: 99.9546% ( 7) 00:10:23.112 33363.782 - 33602.095: 99.9909% ( 4) 00:10:23.112 33602.095 - 33840.407: 100.0000% ( 1) 00:10:23.112 00:10:23.112 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:23.112 ============================================================================== 00:10:23.112 Range in us Cumulative IO count 00:10:23.112 9234.618 - 9294.196: 0.0545% ( 6) 00:10:23.112 9294.196 - 9353.775: 0.1726% ( 13) 00:10:23.112 9353.775 - 9413.353: 0.2816% ( 12) 00:10:23.112 9413.353 - 9472.931: 0.4270% ( 16) 00:10:23.112 9472.931 - 9532.509: 0.5723% ( 16) 00:10:23.112 9532.509 - 9592.087: 0.7722% ( 22) 00:10:23.112 9592.087 - 9651.665: 1.0356% ( 29) 00:10:23.112 9651.665 - 9711.244: 1.3717% ( 37) 00:10:23.112 9711.244 - 9770.822: 1.7442% ( 41) 00:10:23.112 9770.822 - 9830.400: 2.0621% ( 35) 00:10:23.112 9830.400 - 9889.978: 2.4982% ( 48) 00:10:23.112 9889.978 - 9949.556: 3.2431% ( 82) 00:10:23.112 9949.556 - 10009.135: 3.9698% ( 80) 00:10:23.112 10009.135 - 10068.713: 4.9419% ( 107) 00:10:23.112 10068.713 - 10128.291: 5.9684% ( 113) 00:10:23.112 10128.291 - 10187.869: 7.0676% ( 121) 00:10:23.112 10187.869 - 10247.447: 8.2031% ( 125) 00:10:23.112 10247.447 - 10307.025: 9.3932% ( 131) 00:10:23.112 10307.025 - 10366.604: 10.8648% ( 162) 00:10:23.112 10366.604 - 10426.182: 12.3910% ( 168) 00:10:23.112 10426.182 - 10485.760: 14.0898% ( 187) 00:10:23.112 10485.760 - 10545.338: 15.9520% ( 205) 00:10:23.112 10545.338 - 10604.916: 17.9960% ( 225) 00:10:23.112 10604.916 - 10664.495: 20.3398% ( 258) 00:10:23.112 10664.495 - 10724.073: 22.7108% ( 261) 00:10:23.112 10724.073 - 10783.651: 24.8001% ( 230) 00:10:23.112 10783.651 - 10843.229: 26.9077% ( 232) 00:10:23.112 10843.229 - 10902.807: 29.0334% ( 234) 00:10:23.112 10902.807 - 10962.385: 31.1228% ( 230) 00:10:23.112 10962.385 - 11021.964: 33.4302% ( 254) 00:10:23.112 11021.964 - 11081.542: 35.6922% ( 249) 00:10:23.112 11081.542 - 11141.120: 38.0087% ( 255) 00:10:23.112 11141.120 - 11200.698: 40.3979% ( 263) 00:10:23.112 11200.698 - 11260.276: 42.7780% ( 262) 00:10:23.112 11260.276 - 11319.855: 45.0309% ( 248) 00:10:23.112 11319.855 - 11379.433: 47.3474% ( 255) 00:10:23.112 11379.433 - 11439.011: 49.6639% ( 255) 00:10:23.112 11439.011 - 11498.589: 52.0985% ( 268) 00:10:23.112 11498.589 - 11558.167: 54.3877% ( 252) 
00:10:23.112 11558.167 - 11617.745: 56.6134% ( 245) 00:10:23.112 11617.745 - 11677.324: 59.0025% ( 263) 00:10:23.112 11677.324 - 11736.902: 61.2373% ( 246) 00:10:23.112 11736.902 - 11796.480: 63.5174% ( 251) 00:10:23.112 11796.480 - 11856.058: 65.8430% ( 256) 00:10:23.112 11856.058 - 11915.636: 68.0596% ( 244) 00:10:23.112 11915.636 - 11975.215: 70.3307% ( 250) 00:10:23.112 11975.215 - 12034.793: 72.4655% ( 235) 00:10:23.112 12034.793 - 12094.371: 74.5640% ( 231) 00:10:23.112 12094.371 - 12153.949: 76.6533% ( 230) 00:10:23.112 12153.949 - 12213.527: 78.7064% ( 226) 00:10:23.112 12213.527 - 12273.105: 80.6504% ( 214) 00:10:23.112 12273.105 - 12332.684: 82.5218% ( 206) 00:10:23.112 12332.684 - 12392.262: 84.1842% ( 183) 00:10:23.112 12392.262 - 12451.840: 85.7649% ( 174) 00:10:23.112 12451.840 - 12511.418: 87.1820% ( 156) 00:10:23.112 12511.418 - 12570.996: 88.4993% ( 145) 00:10:23.112 12570.996 - 12630.575: 89.6711% ( 129) 00:10:23.112 12630.575 - 12690.153: 90.7794% ( 122) 00:10:23.112 12690.153 - 12749.731: 91.7515% ( 107) 00:10:23.112 12749.731 - 12809.309: 92.6781% ( 102) 00:10:23.112 12809.309 - 12868.887: 93.4230% ( 82) 00:10:23.112 12868.887 - 12928.465: 94.0316% ( 67) 00:10:23.112 12928.465 - 12988.044: 94.6130% ( 64) 00:10:23.112 12988.044 - 13047.622: 95.0672% ( 50) 00:10:23.112 13047.622 - 13107.200: 95.4942% ( 47) 00:10:23.112 13107.200 - 13166.778: 95.9030% ( 45) 00:10:23.112 13166.778 - 13226.356: 96.2936% ( 43) 00:10:23.112 13226.356 - 13285.935: 96.6388% ( 38) 00:10:23.112 13285.935 - 13345.513: 96.9931% ( 39) 00:10:23.112 13345.513 - 13405.091: 97.2565% ( 29) 00:10:23.112 13405.091 - 13464.669: 97.4746% ( 24) 00:10:23.112 13464.669 - 13524.247: 97.6835% ( 23) 00:10:23.112 13524.247 - 13583.825: 97.8924% ( 23) 00:10:23.112 13583.825 - 13643.404: 98.0378% ( 16) 00:10:23.112 13643.404 - 13702.982: 98.1741% ( 15) 00:10:23.112 13702.982 - 13762.560: 98.2740% ( 11) 00:10:23.112 13762.560 - 13822.138: 98.3648% ( 10) 00:10:23.112 13822.138 - 13881.716: 98.4284% ( 7) 00:10:23.112 13881.716 - 13941.295: 98.5102% ( 9) 00:10:23.112 13941.295 - 14000.873: 98.5828% ( 8) 00:10:23.112 14000.873 - 14060.451: 98.6283% ( 5) 00:10:23.112 14060.451 - 14120.029: 98.6555% ( 3) 00:10:23.112 14120.029 - 14179.607: 98.6919% ( 4) 00:10:23.112 14179.607 - 14239.185: 98.7191% ( 3) 00:10:23.112 14239.185 - 14298.764: 98.7555% ( 4) 00:10:23.112 14298.764 - 14358.342: 98.7827% ( 3) 00:10:23.112 14358.342 - 14417.920: 98.8190% ( 4) 00:10:23.112 14417.920 - 14477.498: 98.8372% ( 2) 00:10:23.112 26929.338 - 27048.495: 98.8645% ( 3) 00:10:23.112 27048.495 - 27167.651: 98.8917% ( 3) 00:10:23.112 27167.651 - 27286.807: 98.9099% ( 2) 00:10:23.112 27286.807 - 27405.964: 98.9371% ( 3) 00:10:23.112 27405.964 - 27525.120: 98.9553% ( 2) 00:10:23.112 27525.120 - 27644.276: 98.9826% ( 3) 00:10:23.112 27644.276 - 27763.433: 99.0098% ( 3) 00:10:23.112 27763.433 - 27882.589: 99.0280% ( 2) 00:10:23.112 27882.589 - 28001.745: 99.0461% ( 2) 00:10:23.112 28001.745 - 28120.902: 99.0643% ( 2) 00:10:23.112 28120.902 - 28240.058: 99.0916% ( 3) 00:10:23.112 28240.058 - 28359.215: 99.1188% ( 3) 00:10:23.112 28359.215 - 28478.371: 99.1370% ( 2) 00:10:23.112 28478.371 - 28597.527: 99.1642% ( 3) 00:10:23.112 28597.527 - 28716.684: 99.1915% ( 3) 00:10:23.112 28716.684 - 28835.840: 99.2188% ( 3) 00:10:23.112 28835.840 - 28954.996: 99.2460% ( 3) 00:10:23.112 28954.996 - 29074.153: 99.2733% ( 3) 00:10:23.113 29074.153 - 29193.309: 99.3096% ( 4) 00:10:23.113 29193.309 - 29312.465: 99.3368% ( 3) 00:10:23.113 29312.465 - 29431.622: 99.3641% 
( 3) 00:10:23.113 29431.622 - 29550.778: 99.3914% ( 3) 00:10:23.113 29550.778 - 29669.935: 99.4277% ( 4) 00:10:23.113 29669.935 - 29789.091: 99.4549% ( 3) 00:10:23.113 29789.091 - 29908.247: 99.4913% ( 4) 00:10:23.113 29908.247 - 30027.404: 99.5094% ( 2) 00:10:23.113 30027.404 - 30146.560: 99.5458% ( 4) 00:10:23.113 30146.560 - 30265.716: 99.5730% ( 3) 00:10:23.113 30265.716 - 30384.873: 99.6094% ( 4) 00:10:23.113 30384.873 - 30504.029: 99.6366% ( 3) 00:10:23.113 30504.029 - 30742.342: 99.7002% ( 7) 00:10:23.113 30742.342 - 30980.655: 99.7547% ( 6) 00:10:23.113 30980.655 - 31218.967: 99.8183% ( 7) 00:10:23.113 31218.967 - 31457.280: 99.8728% ( 6) 00:10:23.113 31457.280 - 31695.593: 99.9364% ( 7) 00:10:23.113 31695.593 - 31933.905: 100.0000% ( 7) 00:10:23.113 00:10:23.113 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:23.113 ============================================================================== 00:10:23.113 Range in us Cumulative IO count 00:10:23.113 9115.462 - 9175.040: 0.0091% ( 1) 00:10:23.113 9234.618 - 9294.196: 0.0182% ( 1) 00:10:23.113 9294.196 - 9353.775: 0.0908% ( 8) 00:10:23.113 9353.775 - 9413.353: 0.1999% ( 12) 00:10:23.113 9413.353 - 9472.931: 0.4179% ( 24) 00:10:23.113 9472.931 - 9532.509: 0.5996% ( 20) 00:10:23.113 9532.509 - 9592.087: 0.7903% ( 21) 00:10:23.113 9592.087 - 9651.665: 1.0629% ( 30) 00:10:23.113 9651.665 - 9711.244: 1.4081% ( 38) 00:10:23.113 9711.244 - 9770.822: 1.8078% ( 44) 00:10:23.113 9770.822 - 9830.400: 2.1984% ( 43) 00:10:23.113 9830.400 - 9889.978: 2.6980% ( 55) 00:10:23.113 9889.978 - 9949.556: 3.2522% ( 61) 00:10:23.113 9949.556 - 10009.135: 3.8063% ( 61) 00:10:23.113 10009.135 - 10068.713: 4.6512% ( 93) 00:10:23.113 10068.713 - 10128.291: 5.6323% ( 108) 00:10:23.113 10128.291 - 10187.869: 6.9313% ( 143) 00:10:23.113 10187.869 - 10247.447: 8.4393% ( 166) 00:10:23.113 10247.447 - 10307.025: 10.1744% ( 191) 00:10:23.113 10307.025 - 10366.604: 11.8368% ( 183) 00:10:23.113 10366.604 - 10426.182: 13.5538% ( 189) 00:10:23.113 10426.182 - 10485.760: 15.2980% ( 192) 00:10:23.113 10485.760 - 10545.338: 17.0785% ( 196) 00:10:23.113 10545.338 - 10604.916: 18.9317% ( 204) 00:10:23.113 10604.916 - 10664.495: 20.9666% ( 224) 00:10:23.113 10664.495 - 10724.073: 22.8834% ( 211) 00:10:23.113 10724.073 - 10783.651: 24.8728% ( 219) 00:10:23.113 10783.651 - 10843.229: 26.8805% ( 221) 00:10:23.113 10843.229 - 10902.807: 29.0062% ( 234) 00:10:23.113 10902.807 - 10962.385: 31.1955% ( 241) 00:10:23.113 10962.385 - 11021.964: 33.4757% ( 251) 00:10:23.113 11021.964 - 11081.542: 35.8194% ( 258) 00:10:23.113 11081.542 - 11141.120: 38.0996% ( 251) 00:10:23.113 11141.120 - 11200.698: 40.4161% ( 255) 00:10:23.113 11200.698 - 11260.276: 42.7961% ( 262) 00:10:23.113 11260.276 - 11319.855: 44.9855% ( 241) 00:10:23.113 11319.855 - 11379.433: 47.1657% ( 240) 00:10:23.113 11379.433 - 11439.011: 49.4913% ( 256) 00:10:23.113 11439.011 - 11498.589: 51.7078% ( 244) 00:10:23.113 11498.589 - 11558.167: 53.9608% ( 248) 00:10:23.113 11558.167 - 11617.745: 56.2409% ( 251) 00:10:23.113 11617.745 - 11677.324: 58.5302% ( 252) 00:10:23.113 11677.324 - 11736.902: 60.8103% ( 251) 00:10:23.113 11736.902 - 11796.480: 63.1904% ( 262) 00:10:23.113 11796.480 - 11856.058: 65.4251% ( 246) 00:10:23.113 11856.058 - 11915.636: 67.7144% ( 252) 00:10:23.113 11915.636 - 11975.215: 69.9037% ( 241) 00:10:23.113 11975.215 - 12034.793: 72.1112% ( 243) 00:10:23.113 12034.793 - 12094.371: 74.2460% ( 235) 00:10:23.113 12094.371 - 12153.949: 76.3263% ( 229) 00:10:23.113 12153.949 - 12213.527: 
78.3975% ( 228) 00:10:23.113 12213.527 - 12273.105: 80.4052% ( 221) 00:10:23.113 12273.105 - 12332.684: 82.3129% ( 210) 00:10:23.113 12332.684 - 12392.262: 84.0661% ( 193) 00:10:23.113 12392.262 - 12451.840: 85.8012% ( 191) 00:10:23.113 12451.840 - 12511.418: 87.2275% ( 157) 00:10:23.113 12511.418 - 12570.996: 88.5356% ( 144) 00:10:23.113 12570.996 - 12630.575: 89.7711% ( 136) 00:10:23.113 12630.575 - 12690.153: 90.8794% ( 122) 00:10:23.113 12690.153 - 12749.731: 91.9604% ( 119) 00:10:23.113 12749.731 - 12809.309: 92.8507% ( 98) 00:10:23.113 12809.309 - 12868.887: 93.7682% ( 101) 00:10:23.113 12868.887 - 12928.465: 94.4495% ( 75) 00:10:23.113 12928.465 - 12988.044: 95.0945% ( 71) 00:10:23.113 12988.044 - 13047.622: 95.5850% ( 54) 00:10:23.113 13047.622 - 13107.200: 96.0665% ( 53) 00:10:23.113 13107.200 - 13166.778: 96.4753% ( 45) 00:10:23.113 13166.778 - 13226.356: 96.8477% ( 41) 00:10:23.113 13226.356 - 13285.935: 97.1839% ( 37) 00:10:23.113 13285.935 - 13345.513: 97.4473% ( 29) 00:10:23.113 13345.513 - 13405.091: 97.6926% ( 27) 00:10:23.113 13405.091 - 13464.669: 97.9015% ( 23) 00:10:23.113 13464.669 - 13524.247: 98.1014% ( 22) 00:10:23.113 13524.247 - 13583.825: 98.2649% ( 18) 00:10:23.113 13583.825 - 13643.404: 98.4102% ( 16) 00:10:23.113 13643.404 - 13702.982: 98.5465% ( 15) 00:10:23.113 13702.982 - 13762.560: 98.6010% ( 6) 00:10:23.113 13762.560 - 13822.138: 98.6555% ( 6) 00:10:23.113 13822.138 - 13881.716: 98.7191% ( 7) 00:10:23.113 13881.716 - 13941.295: 98.7555% ( 4) 00:10:23.113 13941.295 - 14000.873: 98.7827% ( 3) 00:10:23.113 14000.873 - 14060.451: 98.8009% ( 2) 00:10:23.113 14060.451 - 14120.029: 98.8281% ( 3) 00:10:23.113 14120.029 - 14179.607: 98.8372% ( 1) 00:10:23.113 26333.556 - 26452.713: 98.8554% ( 2) 00:10:23.113 26452.713 - 26571.869: 98.8735% ( 2) 00:10:23.113 26571.869 - 26691.025: 98.9099% ( 4) 00:10:23.113 26691.025 - 26810.182: 98.9371% ( 3) 00:10:23.113 26810.182 - 26929.338: 98.9644% ( 3) 00:10:23.113 26929.338 - 27048.495: 98.9916% ( 3) 00:10:23.113 27048.495 - 27167.651: 99.0189% ( 3) 00:10:23.113 27167.651 - 27286.807: 99.0552% ( 4) 00:10:23.113 27286.807 - 27405.964: 99.0825% ( 3) 00:10:23.113 27405.964 - 27525.120: 99.1097% ( 3) 00:10:23.113 27525.120 - 27644.276: 99.1461% ( 4) 00:10:23.113 27644.276 - 27763.433: 99.1733% ( 3) 00:10:23.113 27763.433 - 27882.589: 99.2097% ( 4) 00:10:23.113 27882.589 - 28001.745: 99.2278% ( 2) 00:10:23.113 28001.745 - 28120.902: 99.2551% ( 3) 00:10:23.113 28120.902 - 28240.058: 99.2823% ( 3) 00:10:23.113 28240.058 - 28359.215: 99.3096% ( 3) 00:10:23.113 28359.215 - 28478.371: 99.3459% ( 4) 00:10:23.113 28478.371 - 28597.527: 99.3732% ( 3) 00:10:23.113 28597.527 - 28716.684: 99.4004% ( 3) 00:10:23.113 28716.684 - 28835.840: 99.4277% ( 3) 00:10:23.113 28835.840 - 28954.996: 99.4640% ( 4) 00:10:23.113 28954.996 - 29074.153: 99.4913% ( 3) 00:10:23.113 29074.153 - 29193.309: 99.5094% ( 2) 00:10:23.113 29193.309 - 29312.465: 99.5367% ( 3) 00:10:23.113 29312.465 - 29431.622: 99.5640% ( 3) 00:10:23.113 29431.622 - 29550.778: 99.5912% ( 3) 00:10:23.113 29550.778 - 29669.935: 99.6275% ( 4) 00:10:23.113 29669.935 - 29789.091: 99.6548% ( 3) 00:10:23.113 29789.091 - 29908.247: 99.6820% ( 3) 00:10:23.113 29908.247 - 30027.404: 99.7093% ( 3) 00:10:23.113 30027.404 - 30146.560: 99.7456% ( 4) 00:10:23.113 30146.560 - 30265.716: 99.7729% ( 3) 00:10:23.113 30265.716 - 30384.873: 99.8092% ( 4) 00:10:23.113 30384.873 - 30504.029: 99.8365% ( 3) 00:10:23.113 30504.029 - 30742.342: 99.8910% ( 6) 00:10:23.113 30742.342 - 30980.655: 99.9455% ( 6) 
00:10:23.113 30980.655 - 31218.967: 100.0000% ( 6) 00:10:23.113 00:10:23.113 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:23.113 ============================================================================== 00:10:23.113 Range in us Cumulative IO count 00:10:23.113 9353.775 - 9413.353: 0.0091% ( 1) 00:10:23.113 9413.353 - 9472.931: 0.1181% ( 12) 00:10:23.113 9472.931 - 9532.509: 0.1817% ( 7) 00:10:23.113 9532.509 - 9592.087: 0.3361% ( 17) 00:10:23.113 9592.087 - 9651.665: 0.6450% ( 34) 00:10:23.113 9651.665 - 9711.244: 1.0719% ( 47) 00:10:23.113 9711.244 - 9770.822: 1.4989% ( 47) 00:10:23.113 9770.822 - 9830.400: 1.8986% ( 44) 00:10:23.113 9830.400 - 9889.978: 2.3801% ( 53) 00:10:23.113 9889.978 - 9949.556: 3.0705% ( 76) 00:10:23.113 9949.556 - 10009.135: 3.7427% ( 74) 00:10:23.113 10009.135 - 10068.713: 4.6421% ( 99) 00:10:23.113 10068.713 - 10128.291: 5.5323% ( 98) 00:10:23.113 10128.291 - 10187.869: 6.5861% ( 116) 00:10:23.113 10187.869 - 10247.447: 7.7671% ( 130) 00:10:23.113 10247.447 - 10307.025: 9.1570% ( 153) 00:10:23.113 10307.025 - 10366.604: 10.7649% ( 177) 00:10:23.113 10366.604 - 10426.182: 12.4364% ( 184) 00:10:23.113 10426.182 - 10485.760: 14.1352% ( 187) 00:10:23.113 10485.760 - 10545.338: 15.8158% ( 185) 00:10:23.113 10545.338 - 10604.916: 17.9142% ( 231) 00:10:23.113 10604.916 - 10664.495: 20.0036% ( 230) 00:10:23.113 10664.495 - 10724.073: 22.1475% ( 236) 00:10:23.113 10724.073 - 10783.651: 24.2914% ( 236) 00:10:23.113 10783.651 - 10843.229: 26.4989% ( 243) 00:10:23.113 10843.229 - 10902.807: 28.7336% ( 246) 00:10:23.113 10902.807 - 10962.385: 30.9684% ( 246) 00:10:23.113 10962.385 - 11021.964: 33.2576% ( 252) 00:10:23.113 11021.964 - 11081.542: 35.5650% ( 254) 00:10:23.113 11081.542 - 11141.120: 37.9906% ( 267) 00:10:23.113 11141.120 - 11200.698: 40.3161% ( 256) 00:10:23.113 11200.698 - 11260.276: 42.6054% ( 252) 00:10:23.113 11260.276 - 11319.855: 44.8765% ( 250) 00:10:23.113 11319.855 - 11379.433: 47.1566% ( 251) 00:10:23.113 11379.433 - 11439.011: 49.4731% ( 255) 00:10:23.113 11439.011 - 11498.589: 51.7896% ( 255) 00:10:23.113 11498.589 - 11558.167: 54.1606% ( 261) 00:10:23.113 11558.167 - 11617.745: 56.4862% ( 256) 00:10:23.113 11617.745 - 11677.324: 58.8118% ( 256) 00:10:23.113 11677.324 - 11736.902: 61.2009% ( 263) 00:10:23.113 11736.902 - 11796.480: 63.4902% ( 252) 00:10:23.113 11796.480 - 11856.058: 65.8067% ( 255) 00:10:23.113 11856.058 - 11915.636: 68.0142% ( 243) 00:10:23.113 11915.636 - 11975.215: 70.1944% ( 240) 00:10:23.113 11975.215 - 12034.793: 72.3928% ( 242) 00:10:23.113 12034.793 - 12094.371: 74.5640% ( 239) 00:10:23.113 12094.371 - 12153.949: 76.6897% ( 234) 00:10:23.113 12153.949 - 12213.527: 78.6882% ( 220) 00:10:23.113 12213.527 - 12273.105: 80.6232% ( 213) 00:10:23.113 12273.105 - 12332.684: 82.4945% ( 206) 00:10:23.113 12332.684 - 12392.262: 84.2478% ( 193) 00:10:23.113 12392.262 - 12451.840: 85.9648% ( 189) 00:10:23.113 12451.840 - 12511.418: 87.4273% ( 161) 00:10:23.113 12511.418 - 12570.996: 88.8717% ( 159) 00:10:23.113 12570.996 - 12630.575: 90.2253% ( 149) 00:10:23.113 12630.575 - 12690.153: 91.3154% ( 120) 00:10:23.113 12690.153 - 12749.731: 92.3964% ( 119) 00:10:23.113 12749.731 - 12809.309: 93.3412% ( 104) 00:10:23.113 12809.309 - 12868.887: 94.1951% ( 94) 00:10:23.113 12868.887 - 12928.465: 94.8946% ( 77) 00:10:23.113 12928.465 - 12988.044: 95.5033% ( 67) 00:10:23.113 12988.044 - 13047.622: 96.0029% ( 55) 00:10:23.113 13047.622 - 13107.200: 96.4390% ( 48) 00:10:23.113 13107.200 - 13166.778: 96.8205% ( 42) 
00:10:23.113 13166.778 - 13226.356: 97.1112% ( 32) 00:10:23.113 13226.356 - 13285.935: 97.3383% ( 25) 00:10:23.113 13285.935 - 13345.513: 97.5654% ( 25) 00:10:23.113 13345.513 - 13405.091: 97.7743% ( 23) 00:10:23.113 13405.091 - 13464.669: 97.9469% ( 19) 00:10:23.113 13464.669 - 13524.247: 98.0832% ( 15) 00:10:23.113 13524.247 - 13583.825: 98.1922% ( 12) 00:10:23.113 13583.825 - 13643.404: 98.3012% ( 12) 00:10:23.113 13643.404 - 13702.982: 98.4102% ( 12) 00:10:23.113 13702.982 - 13762.560: 98.5011% ( 10) 00:10:23.113 13762.560 - 13822.138: 98.5738% ( 8) 00:10:23.113 13822.138 - 13881.716: 98.6464% ( 8) 00:10:23.113 13881.716 - 13941.295: 98.7373% ( 10) 00:10:23.113 13941.295 - 14000.873: 98.7918% ( 6) 00:10:23.113 14000.873 - 14060.451: 98.8281% ( 4) 00:10:23.113 14060.451 - 14120.029: 98.8372% ( 1) 00:10:23.113 24665.367 - 24784.524: 98.8463% ( 1) 00:10:23.113 24784.524 - 24903.680: 98.8645% ( 2) 00:10:23.113 24903.680 - 25022.836: 98.9008% ( 4) 00:10:23.113 25022.836 - 25141.993: 98.9281% ( 3) 00:10:23.113 25141.993 - 25261.149: 98.9553% ( 3) 00:10:23.113 25261.149 - 25380.305: 98.9916% ( 4) 00:10:23.113 25380.305 - 25499.462: 99.0189% ( 3) 00:10:23.113 25499.462 - 25618.618: 99.0461% ( 3) 00:10:23.113 25618.618 - 25737.775: 99.0734% ( 3) 00:10:23.113 25737.775 - 25856.931: 99.1007% ( 3) 00:10:23.113 25856.931 - 25976.087: 99.1370% ( 4) 00:10:23.113 25976.087 - 26095.244: 99.1642% ( 3) 00:10:23.113 26095.244 - 26214.400: 99.1915% ( 3) 00:10:23.113 26214.400 - 26333.556: 99.2278% ( 4) 00:10:23.113 26333.556 - 26452.713: 99.2551% ( 3) 00:10:23.113 26452.713 - 26571.869: 99.2914% ( 4) 00:10:23.113 26571.869 - 26691.025: 99.3187% ( 3) 00:10:23.113 26691.025 - 26810.182: 99.3550% ( 4) 00:10:23.113 26810.182 - 26929.338: 99.3732% ( 2) 00:10:23.113 26929.338 - 27048.495: 99.4004% ( 3) 00:10:23.114 27048.495 - 27167.651: 99.4277% ( 3) 00:10:23.114 27167.651 - 27286.807: 99.4640% ( 4) 00:10:23.114 27286.807 - 27405.964: 99.4913% ( 3) 00:10:23.114 27405.964 - 27525.120: 99.5185% ( 3) 00:10:23.114 27525.120 - 27644.276: 99.5367% ( 2) 00:10:23.114 27644.276 - 27763.433: 99.5730% ( 4) 00:10:23.114 27763.433 - 27882.589: 99.6003% ( 3) 00:10:23.114 27882.589 - 28001.745: 99.6366% ( 4) 00:10:23.114 28001.745 - 28120.902: 99.6639% ( 3) 00:10:23.114 28120.902 - 28240.058: 99.6911% ( 3) 00:10:23.114 28240.058 - 28359.215: 99.7184% ( 3) 00:10:23.114 28359.215 - 28478.371: 99.7547% ( 4) 00:10:23.114 28478.371 - 28597.527: 99.7820% ( 3) 00:10:23.114 28597.527 - 28716.684: 99.8092% ( 3) 00:10:23.114 28716.684 - 28835.840: 99.8365% ( 3) 00:10:23.114 28835.840 - 28954.996: 99.8637% ( 3) 00:10:23.114 28954.996 - 29074.153: 99.9001% ( 4) 00:10:23.114 29074.153 - 29193.309: 99.9364% ( 4) 00:10:23.114 29193.309 - 29312.465: 99.9637% ( 3) 00:10:23.114 29312.465 - 29431.622: 99.9909% ( 3) 00:10:23.114 29431.622 - 29550.778: 100.0000% ( 1) 00:10:23.114 00:10:23.114 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:23.114 ============================================================================== 00:10:23.114 Range in us Cumulative IO count 00:10:23.114 9413.353 - 9472.931: 0.0273% ( 3) 00:10:23.114 9472.931 - 9532.509: 0.0908% ( 7) 00:10:23.114 9532.509 - 9592.087: 0.2362% ( 16) 00:10:23.114 9592.087 - 9651.665: 0.4542% ( 24) 00:10:23.114 9651.665 - 9711.244: 0.7631% ( 34) 00:10:23.114 9711.244 - 9770.822: 1.2264% ( 51) 00:10:23.114 9770.822 - 9830.400: 1.6443% ( 46) 00:10:23.114 9830.400 - 9889.978: 2.1711% ( 58) 00:10:23.114 9889.978 - 9949.556: 2.8434% ( 74) 00:10:23.114 9949.556 - 10009.135: 
3.4611% ( 68) 00:10:23.114 10009.135 - 10068.713: 4.1334% ( 74) 00:10:23.114 10068.713 - 10128.291: 4.9964% ( 95) 00:10:23.114 10128.291 - 10187.869: 5.9866% ( 109) 00:10:23.114 10187.869 - 10247.447: 7.2311% ( 137) 00:10:23.114 10247.447 - 10307.025: 9.0116% ( 196) 00:10:23.114 10307.025 - 10366.604: 10.9193% ( 210) 00:10:23.114 10366.604 - 10426.182: 13.0178% ( 231) 00:10:23.114 10426.182 - 10485.760: 14.9437% ( 212) 00:10:23.114 10485.760 - 10545.338: 16.7605% ( 200) 00:10:23.114 10545.338 - 10604.916: 18.5229% ( 194) 00:10:23.114 10604.916 - 10664.495: 20.3307% ( 199) 00:10:23.114 10664.495 - 10724.073: 22.3110% ( 218) 00:10:23.114 10724.073 - 10783.651: 24.5821% ( 250) 00:10:23.114 10783.651 - 10843.229: 26.8350% ( 248) 00:10:23.114 10843.229 - 10902.807: 28.9062% ( 228) 00:10:23.114 10902.807 - 10962.385: 30.9048% ( 220) 00:10:23.114 10962.385 - 11021.964: 33.0305% ( 234) 00:10:23.114 11021.964 - 11081.542: 35.1472% ( 233) 00:10:23.114 11081.542 - 11141.120: 37.6181% ( 272) 00:10:23.114 11141.120 - 11200.698: 40.0799% ( 271) 00:10:23.114 11200.698 - 11260.276: 42.5872% ( 276) 00:10:23.114 11260.276 - 11319.855: 45.0218% ( 268) 00:10:23.114 11319.855 - 11379.433: 47.3746% ( 259) 00:10:23.114 11379.433 - 11439.011: 49.8001% ( 267) 00:10:23.114 11439.011 - 11498.589: 52.1166% ( 255) 00:10:23.114 11498.589 - 11558.167: 54.3968% ( 251) 00:10:23.114 11558.167 - 11617.745: 56.7678% ( 261) 00:10:23.114 11617.745 - 11677.324: 59.0661% ( 253) 00:10:23.114 11677.324 - 11736.902: 61.4462% ( 262) 00:10:23.114 11736.902 - 11796.480: 63.6991% ( 248) 00:10:23.114 11796.480 - 11856.058: 65.9430% ( 247) 00:10:23.114 11856.058 - 11915.636: 68.1686% ( 245) 00:10:23.114 11915.636 - 11975.215: 70.3125% ( 236) 00:10:23.114 11975.215 - 12034.793: 72.4927% ( 240) 00:10:23.114 12034.793 - 12094.371: 74.5912% ( 231) 00:10:23.114 12094.371 - 12153.949: 76.7260% ( 235) 00:10:23.114 12153.949 - 12213.527: 78.6973% ( 217) 00:10:23.114 12213.527 - 12273.105: 80.6777% ( 218) 00:10:23.114 12273.105 - 12332.684: 82.5491% ( 206) 00:10:23.114 12332.684 - 12392.262: 84.3114% ( 194) 00:10:23.114 12392.262 - 12451.840: 86.1010% ( 197) 00:10:23.114 12451.840 - 12511.418: 87.6999% ( 176) 00:10:23.114 12511.418 - 12570.996: 89.0716% ( 151) 00:10:23.114 12570.996 - 12630.575: 90.1617% ( 120) 00:10:23.114 12630.575 - 12690.153: 91.2700% ( 122) 00:10:23.114 12690.153 - 12749.731: 92.2329% ( 106) 00:10:23.114 12749.731 - 12809.309: 93.0959% ( 95) 00:10:23.114 12809.309 - 12868.887: 93.8499% ( 83) 00:10:23.114 12868.887 - 12928.465: 94.4949% ( 71) 00:10:23.114 12928.465 - 12988.044: 95.0400% ( 60) 00:10:23.114 12988.044 - 13047.622: 95.5850% ( 60) 00:10:23.114 13047.622 - 13107.200: 96.0029% ( 46) 00:10:23.114 13107.200 - 13166.778: 96.4571% ( 50) 00:10:23.114 13166.778 - 13226.356: 96.8568% ( 44) 00:10:23.114 13226.356 - 13285.935: 97.1657% ( 34) 00:10:23.114 13285.935 - 13345.513: 97.4110% ( 27) 00:10:23.114 13345.513 - 13405.091: 97.6653% ( 28) 00:10:23.114 13405.091 - 13464.669: 97.8289% ( 18) 00:10:23.114 13464.669 - 13524.247: 97.9560% ( 14) 00:10:23.114 13524.247 - 13583.825: 98.0832% ( 14) 00:10:23.114 13583.825 - 13643.404: 98.1831% ( 11) 00:10:23.114 13643.404 - 13702.982: 98.2922% ( 12) 00:10:23.114 13702.982 - 13762.560: 98.3921% ( 11) 00:10:23.114 13762.560 - 13822.138: 98.4648% ( 8) 00:10:23.114 13822.138 - 13881.716: 98.5465% ( 9) 00:10:23.114 13881.716 - 13941.295: 98.6192% ( 8) 00:10:23.114 13941.295 - 14000.873: 98.6919% ( 8) 00:10:23.114 14000.873 - 14060.451: 98.7464% ( 6) 00:10:23.114 14060.451 - 14120.029: 
98.7918% ( 5) 00:10:23.114 14120.029 - 14179.607: 98.8190% ( 3) 00:10:23.114 14179.607 - 14239.185: 98.8372% ( 2) 00:10:23.114 23831.273 - 23950.429: 98.8735% ( 4) 00:10:23.114 23950.429 - 24069.585: 98.9735% ( 11) 00:10:23.114 24069.585 - 24188.742: 99.0825% ( 12) 00:10:23.114 24188.742 - 24307.898: 99.1642% ( 9) 00:10:23.114 24307.898 - 24427.055: 99.1915% ( 3) 00:10:23.114 24427.055 - 24546.211: 99.2188% ( 3) 00:10:23.114 24546.211 - 24665.367: 99.2460% ( 3) 00:10:23.114 24665.367 - 24784.524: 99.2642% ( 2) 00:10:23.114 24784.524 - 24903.680: 99.3005% ( 4) 00:10:23.114 24903.680 - 25022.836: 99.3278% ( 3) 00:10:23.114 25022.836 - 25141.993: 99.3459% ( 2) 00:10:23.114 25141.993 - 25261.149: 99.3732% ( 3) 00:10:23.114 25261.149 - 25380.305: 99.4004% ( 3) 00:10:23.114 25380.305 - 25499.462: 99.4277% ( 3) 00:10:23.114 25499.462 - 25618.618: 99.4549% ( 3) 00:10:23.114 25618.618 - 25737.775: 99.4822% ( 3) 00:10:23.114 25737.775 - 25856.931: 99.5004% ( 2) 00:10:23.114 25856.931 - 25976.087: 99.5276% ( 3) 00:10:23.114 25976.087 - 26095.244: 99.5549% ( 3) 00:10:23.114 26095.244 - 26214.400: 99.5821% ( 3) 00:10:23.114 26214.400 - 26333.556: 99.6094% ( 3) 00:10:23.114 26333.556 - 26452.713: 99.6366% ( 3) 00:10:23.114 26452.713 - 26571.869: 99.6730% ( 4) 00:10:23.114 26571.869 - 26691.025: 99.7002% ( 3) 00:10:23.114 26691.025 - 26810.182: 99.7275% ( 3) 00:10:23.114 26810.182 - 26929.338: 99.7547% ( 3) 00:10:23.114 26929.338 - 27048.495: 99.7911% ( 4) 00:10:23.114 27048.495 - 27167.651: 99.8092% ( 2) 00:10:23.114 27167.651 - 27286.807: 99.8365% ( 3) 00:10:23.114 27286.807 - 27405.964: 99.8637% ( 3) 00:10:23.114 27405.964 - 27525.120: 99.8910% ( 3) 00:10:23.114 27525.120 - 27644.276: 99.9273% ( 4) 00:10:23.114 27644.276 - 27763.433: 99.9546% ( 3) 00:10:23.114 27763.433 - 27882.589: 99.9909% ( 4) 00:10:23.114 27882.589 - 28001.745: 100.0000% ( 1) 00:10:23.114 00:10:23.114 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:23.114 ============================================================================== 00:10:23.114 Range in us Cumulative IO count 00:10:23.114 9294.196 - 9353.775: 0.0545% ( 6) 00:10:23.114 9353.775 - 9413.353: 0.1272% ( 8) 00:10:23.114 9413.353 - 9472.931: 0.2180% ( 10) 00:10:23.114 9472.931 - 9532.509: 0.3452% ( 14) 00:10:23.114 9532.509 - 9592.087: 0.5360% ( 21) 00:10:23.114 9592.087 - 9651.665: 0.6177% ( 9) 00:10:23.114 9651.665 - 9711.244: 0.7631% ( 16) 00:10:23.114 9711.244 - 9770.822: 1.1265% ( 40) 00:10:23.114 9770.822 - 9830.400: 1.6443% ( 57) 00:10:23.114 9830.400 - 9889.978: 2.2892% ( 71) 00:10:23.114 9889.978 - 9949.556: 2.9978% ( 78) 00:10:23.114 9949.556 - 10009.135: 3.8699% ( 96) 00:10:23.114 10009.135 - 10068.713: 4.8964% ( 113) 00:10:23.114 10068.713 - 10128.291: 5.8503% ( 105) 00:10:23.114 10128.291 - 10187.869: 6.7769% ( 102) 00:10:23.114 10187.869 - 10247.447: 7.7035% ( 102) 00:10:23.114 10247.447 - 10307.025: 8.7391% ( 114) 00:10:23.114 10307.025 - 10366.604: 10.1926% ( 160) 00:10:23.114 10366.604 - 10426.182: 11.9368% ( 192) 00:10:23.114 10426.182 - 10485.760: 13.9081% ( 217) 00:10:23.114 10485.760 - 10545.338: 16.1610% ( 248) 00:10:23.114 10545.338 - 10604.916: 18.2685% ( 232) 00:10:23.114 10604.916 - 10664.495: 20.3761% ( 232) 00:10:23.114 10664.495 - 10724.073: 22.3656% ( 219) 00:10:23.114 10724.073 - 10783.651: 24.4368% ( 228) 00:10:23.114 10783.651 - 10843.229: 26.6261% ( 241) 00:10:23.114 10843.229 - 10902.807: 28.8245% ( 242) 00:10:23.114 10902.807 - 10962.385: 31.0956% ( 250) 00:10:23.114 10962.385 - 11021.964: 33.1850% ( 230) 
00:10:23.114 11021.964 - 11081.542: 35.3379% ( 237) 00:10:23.114 11081.542 - 11141.120: 37.3728% ( 224) 00:10:23.114 11141.120 - 11200.698: 39.5803% ( 243) 00:10:23.114 11200.698 - 11260.276: 41.9331% ( 259) 00:10:23.114 11260.276 - 11319.855: 44.1497% ( 244) 00:10:23.114 11319.855 - 11379.433: 46.5116% ( 260) 00:10:23.114 11379.433 - 11439.011: 48.8645% ( 259) 00:10:23.114 11439.011 - 11498.589: 51.3536% ( 274) 00:10:23.114 11498.589 - 11558.167: 53.6973% ( 258) 00:10:23.114 11558.167 - 11617.745: 55.9048% ( 243) 00:10:23.114 11617.745 - 11677.324: 58.3031% ( 264) 00:10:23.114 11677.324 - 11736.902: 60.7104% ( 265) 00:10:23.114 11736.902 - 11796.480: 63.0269% ( 255) 00:10:23.114 11796.480 - 11856.058: 65.3797% ( 259) 00:10:23.114 11856.058 - 11915.636: 67.6781% ( 253) 00:10:23.114 11915.636 - 11975.215: 69.9582% ( 251) 00:10:23.114 11975.215 - 12034.793: 72.2475% ( 252) 00:10:23.114 12034.793 - 12094.371: 74.3278% ( 229) 00:10:23.114 12094.371 - 12153.949: 76.4081% ( 229) 00:10:23.114 12153.949 - 12213.527: 78.4430% ( 224) 00:10:23.114 12213.527 - 12273.105: 80.3779% ( 213) 00:10:23.114 12273.105 - 12332.684: 82.3310% ( 215) 00:10:23.114 12332.684 - 12392.262: 84.1297% ( 198) 00:10:23.114 12392.262 - 12451.840: 85.9193% ( 197) 00:10:23.114 12451.840 - 12511.418: 87.5545% ( 180) 00:10:23.114 12511.418 - 12570.996: 88.9717% ( 156) 00:10:23.114 12570.996 - 12630.575: 90.3070% ( 147) 00:10:23.114 12630.575 - 12690.153: 91.2882% ( 108) 00:10:23.114 12690.153 - 12749.731: 92.2420% ( 105) 00:10:23.114 12749.731 - 12809.309: 93.0959% ( 94) 00:10:23.114 12809.309 - 12868.887: 93.8863% ( 87) 00:10:23.114 12868.887 - 12928.465: 94.6130% ( 80) 00:10:23.114 12928.465 - 12988.044: 95.2398% ( 69) 00:10:23.114 12988.044 - 13047.622: 95.7758% ( 59) 00:10:23.114 13047.622 - 13107.200: 96.2300% ( 50) 00:10:23.114 13107.200 - 13166.778: 96.6297% ( 44) 00:10:23.114 13166.778 - 13226.356: 96.9931% ( 40) 00:10:23.114 13226.356 - 13285.935: 97.3110% ( 35) 00:10:23.114 13285.935 - 13345.513: 97.5927% ( 31) 00:10:23.114 13345.513 - 13405.091: 97.8379% ( 27) 00:10:23.114 13405.091 - 13464.669: 98.0196% ( 20) 00:10:23.114 13464.669 - 13524.247: 98.1831% ( 18) 00:10:23.114 13524.247 - 13583.825: 98.3557% ( 19) 00:10:23.114 13583.825 - 13643.404: 98.4738% ( 13) 00:10:23.114 13643.404 - 13702.982: 98.5465% ( 8) 00:10:23.114 13702.982 - 13762.560: 98.5919% ( 5) 00:10:23.114 13762.560 - 13822.138: 98.6374% ( 5) 00:10:23.114 13822.138 - 13881.716: 98.6737% ( 4) 00:10:23.114 13881.716 - 13941.295: 98.7100% ( 4) 00:10:23.114 13941.295 - 14000.873: 98.7282% ( 2) 00:10:23.114 14000.873 - 14060.451: 98.7555% ( 3) 00:10:23.114 14060.451 - 14120.029: 98.7736% ( 2) 00:10:23.114 14120.029 - 14179.607: 98.7918% ( 2) 00:10:23.114 14179.607 - 14239.185: 98.8190% ( 3) 00:10:23.114 14239.185 - 14298.764: 98.8372% ( 2) 00:10:23.114 21567.302 - 21686.458: 98.9008% ( 7) 00:10:23.114 21686.458 - 21805.615: 98.9281% ( 3) 00:10:23.114 21805.615 - 21924.771: 98.9553% ( 3) 00:10:23.114 21924.771 - 22043.927: 98.9826% ( 3) 00:10:23.114 22043.927 - 22163.084: 99.0098% ( 3) 00:10:23.114 22163.084 - 22282.240: 99.0371% ( 3) 00:10:23.114 22282.240 - 22401.396: 99.0643% ( 3) 00:10:23.114 22401.396 - 22520.553: 99.1007% ( 4) 00:10:23.114 22520.553 - 22639.709: 99.1279% ( 3) 00:10:23.114 22639.709 - 22758.865: 99.1552% ( 3) 00:10:23.114 22758.865 - 22878.022: 99.1824% ( 3) 00:10:23.114 22878.022 - 22997.178: 99.2188% ( 4) 00:10:23.114 22997.178 - 23116.335: 99.2460% ( 3) 00:10:23.114 23116.335 - 23235.491: 99.2733% ( 3) 00:10:23.114 23235.491 - 
23354.647: 99.3005% ( 3)
00:10:23.114 23354.647 - 23473.804: 99.3368% ( 4)
00:10:23.114 23473.804 - 23592.960: 99.3641% ( 3)
00:10:23.114 23592.960 - 23712.116: 99.3914% ( 3)
00:10:23.114 23712.116 - 23831.273: 99.4186% ( 3)
00:10:23.114 23831.273 - 23950.429: 99.4549% ( 4)
00:10:23.115 23950.429 - 24069.585: 99.4822% ( 3)
00:10:23.115 24069.585 - 24188.742: 99.5094% ( 3)
00:10:23.115 24188.742 - 24307.898: 99.5458% ( 4)
00:10:23.115 24307.898 - 24427.055: 99.5730% ( 3)
00:10:23.115 24427.055 - 24546.211: 99.6094% ( 4)
00:10:23.115 24546.211 - 24665.367: 99.6366% ( 3)
00:10:23.115 24665.367 - 24784.524: 99.6639% ( 3)
00:10:23.115 24784.524 - 24903.680: 99.6911% ( 3)
00:10:23.115 24903.680 - 25022.836: 99.7184% ( 3)
00:10:23.115 25022.836 - 25141.993: 99.7547% ( 4)
00:10:23.115 25141.993 - 25261.149: 99.7820% ( 3)
00:10:23.115 25261.149 - 25380.305: 99.8092% ( 3)
00:10:23.115 25380.305 - 25499.462: 99.8365% ( 3)
00:10:23.115 25499.462 - 25618.618: 99.8637% ( 3)
00:10:23.115 25618.618 - 25737.775: 99.8910% ( 3)
00:10:23.115 25737.775 - 25856.931: 99.9273% ( 4)
00:10:23.115 25856.931 - 25976.087: 99.9546% ( 3)
00:10:23.115 25976.087 - 26095.244: 99.9909% ( 4)
00:10:23.115 26095.244 - 26214.400: 100.0000% ( 1)
00:10:23.115
00:10:23.115 21:01:37 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:10:23.115 ************************************
00:10:23.115 END TEST nvme_perf
00:10:23.115 ************************************
00:10:23.115
00:10:23.115 real 0m2.911s
00:10:23.115 user 0m2.482s
00:10:23.115 sys 0m0.291s
00:10:23.115 21:01:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:23.115 21:01:37 -- common/autotest_common.sh@10 -- # set +x
00:10:23.374 21:01:37 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:23.374 21:01:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:10:23.374 21:01:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:23.374 21:01:37 -- common/autotest_common.sh@10 -- # set +x
00:10:23.374 ************************************
00:10:23.374 START TEST nvme_hello_world
00:10:23.374 ************************************
00:10:23.374 21:01:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:23.634 Initializing NVMe Controllers
00:10:23.634 Attached to 0000:00:06.0
00:10:23.634 Namespace ID: 1 size: 6GB
00:10:23.634 Attached to 0000:00:07.0
00:10:23.634 Namespace ID: 1 size: 5GB
00:10:23.634 Attached to 0000:00:09.0
00:10:23.634 Namespace ID: 1 size: 1GB
00:10:23.634 Attached to 0000:00:08.0
00:10:23.634 Namespace ID: 1 size: 4GB
00:10:23.634 Namespace ID: 2 size: 4GB
00:10:23.634 Namespace ID: 3 size: 4GB
00:10:23.634 Initialization complete.
00:10:23.634 INFO: using host memory buffer for IO
00:10:23.634 Hello world!
00:10:23.634 INFO: using host memory buffer for IO
00:10:23.634 Hello world!
00:10:23.634 INFO: using host memory buffer for IO
00:10:23.634 Hello world!
00:10:23.634 INFO: using host memory buffer for IO
00:10:23.634 Hello world!
00:10:23.634 INFO: using host memory buffer for IO
00:10:23.634 Hello world!
00:10:23.634 INFO: using host memory buffer for IO
00:10:23.634 Hello world!
00:10:23.634 ************************************
00:10:23.634 END TEST nvme_hello_world
00:10:23.634 ************************************
00:10:23.634
00:10:23.634 real 0m0.388s
00:10:23.634 user 0m0.202s
00:10:23.634 sys 0m0.141s
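Every test in this sequence is launched through the run_test helper from autotest_common.sh, which is what produces the START/END banners and the real/user/sys timings interleaved above. A rough bash sketch of its shape, inferred from the banners in this log rather than copied from SPDK's sources (the real function in test/common/autotest_common.sh also manages xtrace, as the xtrace_disable lines hint):

    # Approximation of run_test, reconstructed from this log's output;
    # not the verbatim SPDK implementation.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # prints the real/user/sys lines when the test exits
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # Usage, exactly as the log above invokes it:
    run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0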
00:10:23.634 21:01:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:23.634 21:01:37 -- common/autotest_common.sh@10 -- # set +x
00:10:23.634 21:01:37 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:23.634 21:01:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:10:23.634 21:01:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:23.634 21:01:37 -- common/autotest_common.sh@10 -- # set +x
00:10:23.634 ************************************
00:10:23.634 START TEST nvme_sgl
00:10:23.634 ************************************
00:10:23.634 21:01:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:23.893 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:10:23.893 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:10:23.893 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:10:24.152 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:10:24.152 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:10:24.152 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:10:24.152 0000:00:07.0: build_io_request_0 Invalid IO length parameter
00:10:24.152 0000:00:07.0: build_io_request_1 Invalid IO length parameter
00:10:24.152 0000:00:07.0: build_io_request_3 Invalid IO length parameter
00:10:24.152 0000:00:07.0: build_io_request_8 Invalid IO length parameter
00:10:24.152 0000:00:07.0: build_io_request_9 Invalid IO length parameter
00:10:24.152 0000:00:07.0: build_io_request_11 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_0 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_1 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_2 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_3 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_4 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_5 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_6 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_7 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_8 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_9 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_10 Invalid IO length parameter
00:10:24.152 0000:00:09.0: build_io_request_11 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_0 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_1 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_2 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_3 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_4 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_5 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_6 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_7 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_8 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_9 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_10 Invalid IO length parameter
00:10:24.152 0000:00:08.0: build_io_request_11 Invalid IO length parameter
00:10:24.152 NVMe Readv/Writev Request test
00:10:24.152 Attached to 0000:00:06.0
00:10:24.152 Attached to 0000:00:07.0
00:10:24.152 Attached to 0000:00:09.0
00:10:24.152 Attached to 0000:00:08.0
00:10:24.152 0000:00:06.0: build_io_request_2 test passed
00:10:24.152 0000:00:06.0: build_io_request_4 test passed
00:10:24.152 0000:00:06.0: build_io_request_5 test passed
00:10:24.152 0000:00:06.0: build_io_request_6 test passed
00:10:24.152 0000:00:06.0: build_io_request_7 test passed
00:10:24.152 0000:00:06.0: build_io_request_10 test passed
00:10:24.152 0000:00:07.0: build_io_request_2 test passed
00:10:24.152 0000:00:07.0: build_io_request_4 test passed
00:10:24.152 0000:00:07.0: build_io_request_5 test passed
00:10:24.152 0000:00:07.0: build_io_request_6 test passed
00:10:24.152 0000:00:07.0: build_io_request_7 test passed
00:10:24.152 0000:00:07.0: build_io_request_10 test passed
00:10:24.152 Cleaning up...
00:10:24.152
00:10:24.152 real 0m0.541s
00:10:24.152 user 0m0.367s
00:10:24.152 sys 0m0.127s
00:10:24.152 21:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:24.152 21:01:38 -- common/autotest_common.sh@10 -- # set +x
00:10:24.152 ************************************
00:10:24.152 END TEST nvme_sgl
00:10:24.152 ************************************
00:10:24.411 21:01:38 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:10:24.411 21:01:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:10:24.411 21:01:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:24.411 21:01:38 -- common/autotest_common.sh@10 -- # set +x
00:10:24.411 ************************************
00:10:24.411 START TEST nvme_e2edp
00:10:24.411 ************************************
00:10:24.411 21:01:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:10:24.669 NVMe Write/Read with End-to-End data protection test
00:10:24.669 Attached to 0000:00:06.0
00:10:24.669 Attached to 0000:00:07.0
00:10:24.669 Attached to 0000:00:09.0
00:10:24.669 Attached to 0000:00:08.0
00:10:24.669 Cleaning up...
00:10:24.669 ************************************
00:10:24.669 END TEST nvme_e2edp
00:10:24.669 ************************************
00:10:24.669
00:10:24.669 real 0m0.272s
00:10:24.669 user 0m0.102s
00:10:24.669 sys 0m0.129s
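The "Invalid IO length parameter" lines in the SGL test above appear to be expected output rather than failures: the test deliberately builds mis-sized scatter-gather requests, the rejections are logged, the requests meant to complete get "test passed" lines, and run_test still reaches its END banner. All four targets (0000:00:06.0 through 0000:00:09.0) are QEMU-emulated NVMe controllers, as the [1b36:0010] vendor:device ID in the attach lines earlier shows (1b36 is the Red Hat/QEMU vendor ID). Inside the VM they can be listed with:

    # Confirm the emulated controllers the tests attach to;
    # 1b36:0010 is the QEMU NVMe device ID seen in this log.
    lspci -nn | grep '1b36:0010'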
00:10:24.669 21:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:24.669 21:01:38 -- common/autotest_common.sh@10 -- # set +x
00:10:24.669 21:01:38 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:24.669 21:01:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:10:24.669 21:01:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:24.669 21:01:38 -- common/autotest_common.sh@10 -- # set +x
00:10:24.669 ************************************
00:10:24.669 START TEST nvme_reserve
00:10:24.669 ************************************
00:10:24.669 21:01:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:24.928 =====================================================
00:10:24.928 NVMe Controller at PCI bus 0, device 6, function 0
00:10:24.928 =====================================================
00:10:24.928 Reservations: Not Supported
00:10:24.928 =====================================================
00:10:24.928 NVMe Controller at PCI bus 0, device 7, function 0
00:10:24.928 =====================================================
00:10:24.928 Reservations: Not Supported
00:10:24.928 =====================================================
00:10:24.928 NVMe Controller at PCI bus 0, device 9, function 0
00:10:24.928 =====================================================
00:10:24.928 Reservations: Not Supported
00:10:24.928 =====================================================
00:10:24.928 NVMe Controller at PCI bus 0, device 8, function 0
00:10:24.928 =====================================================
00:10:24.928 Reservations: Not Supported
00:10:24.928 Reservation test passed
00:10:24.928 ************************************
00:10:24.928 END TEST nvme_reserve
00:10:24.928 ************************************
00:10:24.928
00:10:24.928 real 0m0.269s
00:10:24.928 user 0m0.089s
00:10:24.928 sys 0m0.135s
00:10:24.928 21:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:24.928 21:01:38 -- common/autotest_common.sh@10 -- # set +x
00:10:24.928 21:01:38 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:10:24.928 21:01:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:10:24.928 21:01:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:24.928 21:01:38 -- common/autotest_common.sh@10 -- # set +x
00:10:24.928 ************************************
00:10:24.928 START TEST nvme_err_injection
00:10:24.928 ************************************
00:10:24.928 21:01:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:10:25.186 NVMe Error Injection test
00:10:25.186 Attached to 0000:00:06.0
00:10:25.186 Attached to 0000:00:07.0
00:10:25.186 Attached to 0000:00:09.0
00:10:25.186 Attached to 0000:00:08.0
00:10:25.186 0000:00:09.0: get features failed as expected
00:10:25.186 0000:00:08.0: get features failed as expected
00:10:25.186 0000:00:06.0: get features failed as expected
00:10:25.186 0000:00:07.0: get features failed as expected
00:10:25.186 0000:00:07.0: get features successfully as expected
00:10:25.186 0000:00:09.0: get features successfully as expected
00:10:25.186 0000:00:08.0: get features successfully as expected
00:10:25.186 0000:00:06.0: get features successfully as expected
00:10:25.186 0000:00:06.0: read failed as expected
00:10:25.186 0000:00:07.0: read failed as expected
00:10:25.186 0000:00:09.0: read failed as expected
00:10:25.186 0000:00:08.0: read failed as expected
00:10:25.186 0000:00:06.0: read successfully as expected
00:10:25.186 0000:00:07.0: read successfully as expected
00:10:25.186 0000:00:09.0: read successfully as expected
00:10:25.186 0000:00:08.0: read successfully as expected
00:10:25.186 Cleaning up...
00:10:25.186 ************************************
00:10:25.186 END TEST nvme_err_injection
00:10:25.186 ************************************
00:10:25.186
00:10:25.186 real 0m0.337s
00:10:25.186 user 0m0.155s
00:10:25.186 sys 0m0.133s
00:10:25.186 21:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:25.186 21:01:39 -- common/autotest_common.sh@10 -- # set +x
00:10:25.186 21:01:39 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:10:25.186 21:01:39 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:10:25.186 21:01:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:25.186 21:01:39 -- common/autotest_common.sh@10 -- # set +x
00:10:25.445 ************************************
00:10:25.445 START TEST nvme_overhead
00:10:25.445 ************************************
00:10:25.445 21:01:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:10:26.823 Initializing NVMe Controllers
00:10:26.823 Attached to 0000:00:06.0
00:10:26.823 Attached to 0000:00:07.0
00:10:26.823 Attached to 0000:00:09.0
00:10:26.823 Attached to 0000:00:08.0
00:10:26.823 Initialization complete. Launching workers.
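The overhead tool measures per-IO software overhead on the submit and complete paths; judging by the flags it shares with the other tools, -o 4096 sets the IO size, -t 1 the duration, -i 0 the shared-memory ID, and -H requests the histograms that follow. Note the units: the avg/min/max summary below is in nanoseconds, while the histogram buckets are in microseconds, and each bucket line reads "low - high: cumulative% ( count )", so the median lands in the first bucket whose cumulative percentage reaches 50. A small sketch for pulling that bucket out of a saved copy of such output (overhead.log is an illustrative file name, and the sketch assumes the 00:10:26.xxx timestamp column has been stripped, so lines look like "14.778 - 14.836: 53.4155% ( 375)"):

    # Sketch: print the first bucket at or past the 50th percentile.
    # Splitting on ':' and '%' makes $1 the bucket range and $2 the
    # cumulative percentage; exit after the first match.
    awk -F'[:%]' '/ - / && $2 + 0 >= 50 { print; exit }' overhead.log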
00:10:26.823 submit (in ns) avg, min, max = 15974.9, 13000.0, 106694.1 00:10:26.823 complete (in ns) avg, min, max = 11515.8, 8934.5, 50929.5 00:10:26.823 00:10:26.823 Submit histogram 00:10:26.823 ================ 00:10:26.823 Range in us Cumulative Count 00:10:26.823 12.975 - 13.033: 0.0122% ( 1) 00:10:26.823 13.615 - 13.673: 0.0366% ( 2) 00:10:26.823 13.673 - 13.731: 0.0854% ( 4) 00:10:26.823 13.731 - 13.789: 0.1464% ( 5) 00:10:26.823 13.789 - 13.847: 0.3537% ( 17) 00:10:26.823 13.847 - 13.905: 0.9515% ( 49) 00:10:26.823 13.905 - 13.964: 1.9151% ( 79) 00:10:26.823 13.964 - 14.022: 3.6350% ( 141) 00:10:26.823 14.022 - 14.080: 5.4038% ( 145) 00:10:26.823 14.080 - 14.138: 7.3798% ( 162) 00:10:26.823 14.138 - 14.196: 9.1974% ( 149) 00:10:26.823 14.196 - 14.255: 10.7953% ( 131) 00:10:26.823 14.255 - 14.313: 13.3569% ( 210) 00:10:26.823 14.313 - 14.371: 17.4311% ( 334) 00:10:26.823 14.371 - 14.429: 23.1764% ( 471) 00:10:26.823 14.429 - 14.487: 29.3852% ( 509) 00:10:26.823 14.487 - 14.545: 35.8258% ( 528) 00:10:26.823 14.545 - 14.604: 40.0829% ( 349) 00:10:26.823 14.604 - 14.662: 43.1203% ( 249) 00:10:26.823 14.662 - 14.720: 45.4013% ( 187) 00:10:26.823 14.720 - 14.778: 48.8412% ( 282) 00:10:26.823 14.778 - 14.836: 53.4155% ( 375) 00:10:26.823 14.836 - 14.895: 59.6243% ( 509) 00:10:26.823 14.895 - 15.011: 70.9563% ( 929) 00:10:26.823 15.011 - 15.127: 77.2993% ( 520) 00:10:26.823 15.127 - 15.244: 79.8853% ( 212) 00:10:26.823 15.244 - 15.360: 81.9712% ( 171) 00:10:26.823 15.360 - 15.476: 84.3864% ( 198) 00:10:26.823 15.476 - 15.593: 85.6550% ( 104) 00:10:26.823 15.593 - 15.709: 86.7407% ( 89) 00:10:26.823 15.709 - 15.825: 87.4848% ( 61) 00:10:26.823 15.825 - 15.942: 88.0215% ( 44) 00:10:26.823 15.942 - 16.058: 88.6070% ( 48) 00:10:26.823 16.058 - 16.175: 89.0461% ( 36) 00:10:26.823 16.175 - 16.291: 89.3145% ( 22) 00:10:26.823 16.291 - 16.407: 89.5096% ( 16) 00:10:26.823 16.407 - 16.524: 89.6560% ( 12) 00:10:26.823 16.524 - 16.640: 89.8024% ( 12) 00:10:26.823 16.640 - 16.756: 89.8634% ( 5) 00:10:26.823 16.756 - 16.873: 89.9488% ( 7) 00:10:26.823 16.873 - 16.989: 89.9732% ( 2) 00:10:26.823 16.989 - 17.105: 90.0098% ( 3) 00:10:26.823 17.105 - 17.222: 90.0464% ( 3) 00:10:26.823 17.222 - 17.338: 90.0586% ( 1) 00:10:26.823 17.338 - 17.455: 90.0829% ( 2) 00:10:26.823 17.455 - 17.571: 90.0951% ( 1) 00:10:26.823 17.571 - 17.687: 90.1195% ( 2) 00:10:26.823 17.687 - 17.804: 90.1317% ( 1) 00:10:26.823 17.920 - 18.036: 90.1439% ( 1) 00:10:26.823 18.036 - 18.153: 90.1561% ( 1) 00:10:26.823 18.153 - 18.269: 90.1683% ( 1) 00:10:26.823 18.269 - 18.385: 90.1805% ( 1) 00:10:26.823 18.502 - 18.618: 90.2049% ( 2) 00:10:26.823 18.618 - 18.735: 90.2415% ( 3) 00:10:26.823 18.735 - 18.851: 90.2537% ( 1) 00:10:26.823 18.851 - 18.967: 90.2781% ( 2) 00:10:26.823 18.967 - 19.084: 90.3269% ( 4) 00:10:26.823 19.084 - 19.200: 90.3757% ( 4) 00:10:26.823 19.200 - 19.316: 90.4001% ( 2) 00:10:26.823 19.316 - 19.433: 90.4733% ( 6) 00:10:26.823 19.433 - 19.549: 90.4855% ( 1) 00:10:26.823 19.549 - 19.665: 90.5953% ( 9) 00:10:26.823 19.665 - 19.782: 90.6929% ( 8) 00:10:26.823 19.782 - 19.898: 90.7904% ( 8) 00:10:26.823 19.898 - 20.015: 90.9124% ( 10) 00:10:26.823 20.015 - 20.131: 90.9978% ( 7) 00:10:26.823 20.131 - 20.247: 91.1442% ( 12) 00:10:26.823 20.247 - 20.364: 91.2418% ( 8) 00:10:26.823 20.364 - 20.480: 91.4125% ( 14) 00:10:26.823 20.480 - 20.596: 91.5711% ( 13) 00:10:26.823 20.596 - 20.713: 91.7541% ( 15) 00:10:26.823 20.713 - 20.829: 91.9493% ( 16) 00:10:26.823 20.829 - 20.945: 92.0834% ( 11) 00:10:26.823 20.945 - 21.062: 
92.2420% ( 13) 00:10:26.824 21.062 - 21.178: 92.4006% ( 13) 00:10:26.824 21.178 - 21.295: 92.4982% ( 8) 00:10:26.824 21.295 - 21.411: 92.6811% ( 15) 00:10:26.824 21.411 - 21.527: 92.7787% ( 8) 00:10:26.824 21.527 - 21.644: 92.8275% ( 4) 00:10:26.824 21.644 - 21.760: 92.9007% ( 6) 00:10:26.824 21.760 - 21.876: 92.9739% ( 6) 00:10:26.824 21.876 - 21.993: 93.0227% ( 4) 00:10:26.824 21.993 - 22.109: 93.0593% ( 3) 00:10:26.824 22.109 - 22.225: 93.1447% ( 7) 00:10:26.824 22.225 - 22.342: 93.1935% ( 4) 00:10:26.824 22.342 - 22.458: 93.2545% ( 5) 00:10:26.824 22.458 - 22.575: 93.3154% ( 5) 00:10:26.824 22.575 - 22.691: 93.3642% ( 4) 00:10:26.824 22.691 - 22.807: 93.4008% ( 3) 00:10:26.824 22.807 - 22.924: 93.4496% ( 4) 00:10:26.824 22.924 - 23.040: 93.4618% ( 1) 00:10:26.824 23.040 - 23.156: 93.5228% ( 5) 00:10:26.824 23.156 - 23.273: 93.5350% ( 1) 00:10:26.824 23.273 - 23.389: 93.5594% ( 2) 00:10:26.824 23.389 - 23.505: 93.6082% ( 4) 00:10:26.824 23.505 - 23.622: 93.6692% ( 5) 00:10:26.824 23.622 - 23.738: 93.6814% ( 1) 00:10:26.824 23.738 - 23.855: 93.7180% ( 3) 00:10:26.824 23.855 - 23.971: 93.7424% ( 2) 00:10:26.824 23.971 - 24.087: 93.7546% ( 1) 00:10:26.824 24.204 - 24.320: 93.7790% ( 2) 00:10:26.824 24.320 - 24.436: 93.8156% ( 3) 00:10:26.824 24.436 - 24.553: 93.8522% ( 3) 00:10:26.824 24.553 - 24.669: 93.8766% ( 2) 00:10:26.824 24.785 - 24.902: 93.9131% ( 3) 00:10:26.824 24.902 - 25.018: 93.9497% ( 3) 00:10:26.824 25.018 - 25.135: 93.9741% ( 2) 00:10:26.824 25.135 - 25.251: 94.0229% ( 4) 00:10:26.824 25.251 - 25.367: 94.0595% ( 3) 00:10:26.824 25.367 - 25.484: 94.0961% ( 3) 00:10:26.824 25.484 - 25.600: 94.1571% ( 5) 00:10:26.824 25.600 - 25.716: 94.2059% ( 4) 00:10:26.824 25.716 - 25.833: 94.2303% ( 2) 00:10:26.824 25.833 - 25.949: 94.3157% ( 7) 00:10:26.824 25.949 - 26.065: 94.3279% ( 1) 00:10:26.824 26.065 - 26.182: 94.4011% ( 6) 00:10:26.824 26.182 - 26.298: 94.4743% ( 6) 00:10:26.824 26.298 - 26.415: 94.4865% ( 1) 00:10:26.824 26.531 - 26.647: 94.4987% ( 1) 00:10:26.824 26.647 - 26.764: 94.5231% ( 2) 00:10:26.824 26.880 - 26.996: 94.5353% ( 1) 00:10:26.824 26.996 - 27.113: 94.5596% ( 2) 00:10:26.824 27.113 - 27.229: 94.5718% ( 1) 00:10:26.824 27.229 - 27.345: 94.6084% ( 3) 00:10:26.824 27.462 - 27.578: 94.6328% ( 2) 00:10:26.824 27.578 - 27.695: 94.6572% ( 2) 00:10:26.824 27.695 - 27.811: 94.6694% ( 1) 00:10:26.824 28.276 - 28.393: 94.6938% ( 2) 00:10:26.824 28.393 - 28.509: 94.7426% ( 4) 00:10:26.824 28.509 - 28.625: 94.8524% ( 9) 00:10:26.824 28.625 - 28.742: 94.9378% ( 7) 00:10:26.824 28.742 - 28.858: 95.0720% ( 11) 00:10:26.824 28.858 - 28.975: 95.1939% ( 10) 00:10:26.824 28.975 - 29.091: 95.5721% ( 31) 00:10:26.824 29.091 - 29.207: 95.8648% ( 24) 00:10:26.824 29.207 - 29.324: 96.1698% ( 25) 00:10:26.824 29.324 - 29.440: 96.6455% ( 39) 00:10:26.824 29.440 - 29.556: 97.1456% ( 41) 00:10:26.824 29.556 - 29.673: 97.5726% ( 35) 00:10:26.824 29.673 - 29.789: 97.8653% ( 24) 00:10:26.824 29.789 - 30.022: 98.1825% ( 26) 00:10:26.824 30.022 - 30.255: 98.4142% ( 19) 00:10:26.824 30.255 - 30.487: 98.4508% ( 3) 00:10:26.824 30.487 - 30.720: 98.5362% ( 7) 00:10:26.824 30.720 - 30.953: 98.6216% ( 7) 00:10:26.824 30.953 - 31.185: 98.7436% ( 10) 00:10:26.824 31.185 - 31.418: 98.7802% ( 3) 00:10:26.824 31.418 - 31.651: 98.8290% ( 4) 00:10:26.824 31.651 - 31.884: 98.9022% ( 6) 00:10:26.824 31.884 - 32.116: 98.9510% ( 4) 00:10:26.824 32.349 - 32.582: 98.9754% ( 2) 00:10:26.824 32.582 - 32.815: 98.9998% ( 2) 00:10:26.824 32.815 - 33.047: 99.0120% ( 1) 00:10:26.824 33.047 - 33.280: 99.0242% ( 1) 
00:10:26.824 33.513 - 33.745: 99.0485% ( 2) 00:10:26.824 34.211 - 34.444: 99.0607% ( 1) 00:10:26.824 34.444 - 34.676: 99.1095% ( 4) 00:10:26.824 34.676 - 34.909: 99.1339% ( 2) 00:10:26.824 34.909 - 35.142: 99.1705% ( 3) 00:10:26.824 35.375 - 35.607: 99.2071% ( 3) 00:10:26.824 35.607 - 35.840: 99.2559% ( 4) 00:10:26.824 35.840 - 36.073: 99.3413% ( 7) 00:10:26.824 36.073 - 36.305: 99.3779% ( 3) 00:10:26.824 36.305 - 36.538: 99.3901% ( 1) 00:10:26.824 36.538 - 36.771: 99.4267% ( 3) 00:10:26.824 36.771 - 37.004: 99.4511% ( 2) 00:10:26.824 37.004 - 37.236: 99.4633% ( 1) 00:10:26.824 37.236 - 37.469: 99.4999% ( 3) 00:10:26.824 37.469 - 37.702: 99.5365% ( 3) 00:10:26.824 37.702 - 37.935: 99.5731% ( 3) 00:10:26.824 37.935 - 38.167: 99.5975% ( 2) 00:10:26.824 38.167 - 38.400: 99.6097% ( 1) 00:10:26.824 38.633 - 38.865: 99.6341% ( 2) 00:10:26.824 39.098 - 39.331: 99.6463% ( 1) 00:10:26.824 39.331 - 39.564: 99.6585% ( 1) 00:10:26.824 40.029 - 40.262: 99.6707% ( 1) 00:10:26.824 40.262 - 40.495: 99.6828% ( 1) 00:10:26.824 40.495 - 40.727: 99.7072% ( 2) 00:10:26.824 40.727 - 40.960: 99.7194% ( 1) 00:10:26.824 41.193 - 41.425: 99.7316% ( 1) 00:10:26.824 42.124 - 42.356: 99.7438% ( 1) 00:10:26.824 42.589 - 42.822: 99.7560% ( 1) 00:10:26.824 43.055 - 43.287: 99.7682% ( 1) 00:10:26.824 43.520 - 43.753: 99.7804% ( 1) 00:10:26.824 43.753 - 43.985: 99.8048% ( 2) 00:10:26.824 43.985 - 44.218: 99.8170% ( 1) 00:10:26.824 45.149 - 45.382: 99.8292% ( 1) 00:10:26.824 45.615 - 45.847: 99.8414% ( 1) 00:10:26.824 45.847 - 46.080: 99.8536% ( 1) 00:10:26.824 46.080 - 46.313: 99.8658% ( 1) 00:10:26.824 46.313 - 46.545: 99.8780% ( 1) 00:10:26.824 48.873 - 49.105: 99.8902% ( 1) 00:10:26.824 49.338 - 49.571: 99.9024% ( 1) 00:10:26.824 49.804 - 50.036: 99.9146% ( 1) 00:10:26.824 52.596 - 52.829: 99.9268% ( 1) 00:10:26.824 53.527 - 53.760: 99.9512% ( 2) 00:10:26.824 73.542 - 74.007: 99.9634% ( 1) 00:10:26.824 75.404 - 75.869: 99.9756% ( 1) 00:10:26.824 81.455 - 81.920: 99.9878% ( 1) 00:10:26.824 106.589 - 107.055: 100.0000% ( 1) 00:10:26.824 00:10:26.824 Complete histogram 00:10:26.824 ================== 00:10:26.824 Range in us Cumulative Count 00:10:26.824 8.902 - 8.960: 0.0122% ( 1) 00:10:26.824 9.076 - 9.135: 0.0366% ( 2) 00:10:26.824 9.135 - 9.193: 0.1952% ( 13) 00:10:26.824 9.193 - 9.251: 0.7807% ( 48) 00:10:26.824 9.251 - 9.309: 1.6467% ( 71) 00:10:26.824 9.309 - 9.367: 2.6348% ( 81) 00:10:26.824 9.367 - 9.425: 3.5374% ( 74) 00:10:26.824 9.425 - 9.484: 5.1964% ( 136) 00:10:26.824 9.484 - 9.542: 9.2096% ( 329) 00:10:26.824 9.542 - 9.600: 14.9427% ( 470) 00:10:26.824 9.600 - 9.658: 20.5660% ( 461) 00:10:26.824 9.658 - 9.716: 23.8473% ( 269) 00:10:26.824 9.716 - 9.775: 27.2262% ( 277) 00:10:26.824 9.775 - 9.833: 34.4474% ( 592) 00:10:26.824 9.833 - 9.891: 45.0598% ( 870) 00:10:26.824 9.891 - 9.949: 55.9039% ( 889) 00:10:26.824 9.949 - 10.007: 63.1739% ( 596) 00:10:26.824 10.007 - 10.065: 67.3335% ( 341) 00:10:26.824 10.065 - 10.124: 70.0293% ( 221) 00:10:26.824 10.124 - 10.182: 72.3103% ( 187) 00:10:26.824 10.182 - 10.240: 74.2010% ( 155) 00:10:26.824 10.240 - 10.298: 75.7258% ( 125) 00:10:26.824 10.298 - 10.356: 76.7382% ( 83) 00:10:26.824 10.356 - 10.415: 77.4457% ( 58) 00:10:26.824 10.415 - 10.473: 78.3118% ( 71) 00:10:26.824 10.473 - 10.531: 79.0437% ( 60) 00:10:26.824 10.531 - 10.589: 79.7268% ( 56) 00:10:26.824 10.589 - 10.647: 80.5074% ( 64) 00:10:26.824 10.647 - 10.705: 81.1661% ( 54) 00:10:26.824 10.705 - 10.764: 82.0200% ( 70) 00:10:26.824 10.764 - 10.822: 82.7031% ( 56) 00:10:26.824 10.822 - 10.880: 83.5692% ( 
71) 00:10:26.824 10.880 - 10.938: 84.2645% ( 57) 00:10:26.824 10.938 - 10.996: 84.9841% ( 59) 00:10:26.824 10.996 - 11.055: 85.4477% ( 38) 00:10:26.824 11.055 - 11.113: 85.6672% ( 18) 00:10:26.824 11.113 - 11.171: 85.8502% ( 15) 00:10:26.824 11.171 - 11.229: 85.8868% ( 3) 00:10:26.824 11.229 - 11.287: 86.0454% ( 13) 00:10:26.824 11.287 - 11.345: 86.1918% ( 12) 00:10:26.824 11.345 - 11.404: 86.2771% ( 7) 00:10:26.824 11.404 - 11.462: 86.3137% ( 3) 00:10:26.824 11.462 - 11.520: 86.4113% ( 8) 00:10:26.824 11.520 - 11.578: 86.4601% ( 4) 00:10:26.824 11.578 - 11.636: 86.5455% ( 7) 00:10:26.824 11.636 - 11.695: 86.5943% ( 4) 00:10:26.824 11.695 - 11.753: 86.6431% ( 4) 00:10:26.824 11.753 - 11.811: 86.7163% ( 6) 00:10:26.824 11.811 - 11.869: 86.8017% ( 7) 00:10:26.824 11.869 - 11.927: 86.8748% ( 6) 00:10:26.824 11.927 - 11.985: 86.9602% ( 7) 00:10:26.824 11.985 - 12.044: 87.0212% ( 5) 00:10:26.824 12.044 - 12.102: 87.0700% ( 4) 00:10:26.824 12.102 - 12.160: 87.1066% ( 3) 00:10:26.824 12.160 - 12.218: 87.1798% ( 6) 00:10:26.824 12.218 - 12.276: 87.3018% ( 10) 00:10:26.824 12.276 - 12.335: 87.3872% ( 7) 00:10:26.824 12.335 - 12.393: 87.4238% ( 3) 00:10:26.824 12.393 - 12.451: 87.4848% ( 5) 00:10:26.824 12.451 - 12.509: 87.5945% ( 9) 00:10:26.824 12.509 - 12.567: 87.6799% ( 7) 00:10:26.824 12.567 - 12.625: 87.7531% ( 6) 00:10:26.824 12.625 - 12.684: 87.8019% ( 4) 00:10:26.824 12.684 - 12.742: 87.8385% ( 3) 00:10:26.824 12.742 - 12.800: 87.8873% ( 4) 00:10:26.824 12.800 - 12.858: 87.9483% ( 5) 00:10:26.824 12.858 - 12.916: 88.0337% ( 7) 00:10:26.824 12.916 - 12.975: 88.0703% ( 3) 00:10:26.824 12.975 - 13.033: 88.1313% ( 5) 00:10:26.824 13.033 - 13.091: 88.1678% ( 3) 00:10:26.824 13.091 - 13.149: 88.2166% ( 4) 00:10:26.825 13.149 - 13.207: 88.2288% ( 1) 00:10:26.825 13.207 - 13.265: 88.2532% ( 2) 00:10:26.825 13.382 - 13.440: 88.2898% ( 3) 00:10:26.825 13.440 - 13.498: 88.3020% ( 1) 00:10:26.825 13.498 - 13.556: 88.3386% ( 3) 00:10:26.825 13.556 - 13.615: 88.3508% ( 1) 00:10:26.825 13.731 - 13.789: 88.3874% ( 3) 00:10:26.825 13.789 - 13.847: 88.3996% ( 1) 00:10:26.825 13.964 - 14.022: 88.4240% ( 2) 00:10:26.825 14.138 - 14.196: 88.4484% ( 2) 00:10:26.825 14.196 - 14.255: 88.4728% ( 2) 00:10:26.825 14.313 - 14.371: 88.4850% ( 1) 00:10:26.825 14.371 - 14.429: 88.5094% ( 2) 00:10:26.825 14.487 - 14.545: 88.5338% ( 2) 00:10:26.825 14.545 - 14.604: 88.5460% ( 1) 00:10:26.825 14.720 - 14.778: 88.5582% ( 1) 00:10:26.825 14.778 - 14.836: 88.5826% ( 2) 00:10:26.825 14.895 - 15.011: 88.6192% ( 3) 00:10:26.825 15.011 - 15.127: 88.6558% ( 3) 00:10:26.825 15.127 - 15.244: 88.6802% ( 2) 00:10:26.825 15.244 - 15.360: 88.7168% ( 3) 00:10:26.825 15.360 - 15.476: 88.7534% ( 3) 00:10:26.825 15.476 - 15.593: 88.8631% ( 9) 00:10:26.825 15.593 - 15.709: 88.9119% ( 4) 00:10:26.825 15.709 - 15.825: 89.0095% ( 8) 00:10:26.825 15.825 - 15.942: 89.2169% ( 17) 00:10:26.825 15.942 - 16.058: 89.3145% ( 8) 00:10:26.825 16.058 - 16.175: 89.4730% ( 13) 00:10:26.825 16.175 - 16.291: 89.5828% ( 9) 00:10:26.825 16.291 - 16.407: 89.7414% ( 13) 00:10:26.825 16.407 - 16.524: 89.8146% ( 6) 00:10:26.825 16.524 - 16.640: 89.9000% ( 7) 00:10:26.825 16.640 - 16.756: 90.0464% ( 12) 00:10:26.825 16.756 - 16.873: 90.1317% ( 7) 00:10:26.825 16.873 - 16.989: 90.2293% ( 8) 00:10:26.825 16.989 - 17.105: 90.3025% ( 6) 00:10:26.825 17.105 - 17.222: 90.3635% ( 5) 00:10:26.825 17.222 - 17.338: 90.4367% ( 6) 00:10:26.825 17.338 - 17.455: 90.4733% ( 3) 00:10:26.825 17.455 - 17.571: 90.5709% ( 8) 00:10:26.825 17.571 - 17.687: 90.6807% ( 9) 00:10:26.825 
17.687 - 17.804: 90.7416% ( 5) 00:10:26.825 17.804 - 17.920: 90.8270% ( 7) 00:10:26.825 17.920 - 18.036: 90.9246% ( 8) 00:10:26.825 18.036 - 18.153: 90.9734% ( 4) 00:10:26.825 18.153 - 18.269: 91.0222% ( 4) 00:10:26.825 18.269 - 18.385: 91.0954% ( 6) 00:10:26.825 18.385 - 18.502: 91.1442% ( 4) 00:10:26.825 18.502 - 18.618: 91.2296% ( 7) 00:10:26.825 18.618 - 18.735: 91.2784% ( 4) 00:10:26.825 18.735 - 18.851: 91.2906% ( 1) 00:10:26.825 18.851 - 18.967: 91.3028% ( 1) 00:10:26.825 18.967 - 19.084: 91.3150% ( 1) 00:10:26.825 19.316 - 19.433: 91.3272% ( 1) 00:10:26.825 19.433 - 19.549: 91.3394% ( 1) 00:10:26.825 19.665 - 19.782: 91.3515% ( 1) 00:10:26.825 19.782 - 19.898: 91.3637% ( 1) 00:10:26.825 19.898 - 20.015: 91.3759% ( 1) 00:10:26.825 20.015 - 20.131: 91.4003% ( 2) 00:10:26.825 20.480 - 20.596: 91.4125% ( 1) 00:10:26.825 20.596 - 20.713: 91.4247% ( 1) 00:10:26.825 20.713 - 20.829: 91.4491% ( 2) 00:10:26.825 20.829 - 20.945: 91.4613% ( 1) 00:10:26.825 20.945 - 21.062: 91.4857% ( 2) 00:10:26.825 21.062 - 21.178: 91.5345% ( 4) 00:10:26.825 21.295 - 21.411: 91.5589% ( 2) 00:10:26.825 21.527 - 21.644: 91.5833% ( 2) 00:10:26.825 21.644 - 21.760: 91.6077% ( 2) 00:10:26.825 21.876 - 21.993: 91.6199% ( 1) 00:10:26.825 22.109 - 22.225: 91.6321% ( 1) 00:10:26.825 22.225 - 22.342: 91.6443% ( 1) 00:10:26.825 22.807 - 22.924: 91.6565% ( 1) 00:10:26.825 23.505 - 23.622: 91.6687% ( 1) 00:10:26.825 23.622 - 23.738: 91.7053% ( 3) 00:10:26.825 23.738 - 23.855: 91.7907% ( 7) 00:10:26.825 23.855 - 23.971: 92.0102% ( 18) 00:10:26.825 23.971 - 24.087: 92.4250% ( 34) 00:10:26.825 24.087 - 24.204: 92.9983% ( 47) 00:10:26.825 24.204 - 24.320: 93.7424% ( 61) 00:10:26.825 24.320 - 24.436: 94.9134% ( 96) 00:10:26.825 24.436 - 24.553: 95.8161% ( 74) 00:10:26.825 24.553 - 24.669: 96.5967% ( 64) 00:10:26.825 24.669 - 24.785: 96.9871% ( 32) 00:10:26.825 24.785 - 24.902: 97.4506% ( 38) 00:10:26.825 24.902 - 25.018: 97.6824% ( 19) 00:10:26.825 25.018 - 25.135: 97.9995% ( 26) 00:10:26.825 25.135 - 25.251: 98.1703% ( 14) 00:10:26.825 25.251 - 25.367: 98.3167% ( 12) 00:10:26.825 25.367 - 25.484: 98.4142% ( 8) 00:10:26.825 25.484 - 25.600: 98.4630% ( 4) 00:10:26.825 25.600 - 25.716: 98.5484% ( 7) 00:10:26.825 25.716 - 25.833: 98.6216% ( 6) 00:10:26.825 25.833 - 25.949: 98.6582% ( 3) 00:10:26.825 25.949 - 26.065: 98.6704% ( 1) 00:10:26.825 26.065 - 26.182: 98.7192% ( 4) 00:10:26.825 26.182 - 26.298: 98.7436% ( 2) 00:10:26.825 26.298 - 26.415: 98.7802% ( 3) 00:10:26.825 26.415 - 26.531: 98.8168% ( 3) 00:10:26.825 26.531 - 26.647: 98.8778% ( 5) 00:10:26.825 26.647 - 26.764: 98.9510% ( 6) 00:10:26.825 26.764 - 26.880: 98.9632% ( 1) 00:10:26.825 26.880 - 26.996: 98.9876% ( 2) 00:10:26.825 26.996 - 27.113: 99.0120% ( 2) 00:10:26.825 27.113 - 27.229: 99.0364% ( 2) 00:10:26.825 27.345 - 27.462: 99.0485% ( 1) 00:10:26.825 27.578 - 27.695: 99.0607% ( 1) 00:10:26.825 27.695 - 27.811: 99.1095% ( 4) 00:10:26.825 28.276 - 28.393: 99.1339% ( 2) 00:10:26.825 28.975 - 29.091: 99.1461% ( 1) 00:10:26.825 29.324 - 29.440: 99.1583% ( 1) 00:10:26.825 29.673 - 29.789: 99.1705% ( 1) 00:10:26.825 29.789 - 30.022: 99.1949% ( 2) 00:10:26.825 30.022 - 30.255: 99.2559% ( 5) 00:10:26.825 30.255 - 30.487: 99.3779% ( 10) 00:10:26.825 30.487 - 30.720: 99.4755% ( 8) 00:10:26.825 30.720 - 30.953: 99.5487% ( 6) 00:10:26.825 30.953 - 31.185: 99.5853% ( 3) 00:10:26.825 31.185 - 31.418: 99.6219% ( 3) 00:10:26.825 31.418 - 31.651: 99.6463% ( 2) 00:10:26.825 31.651 - 31.884: 99.6585% ( 1) 00:10:26.825 31.884 - 32.116: 99.7194% ( 5) 00:10:26.825 32.349 - 32.582: 
99.7560% ( 3) 00:10:26.825 32.582 - 32.815: 99.7804% ( 2) 00:10:26.825 33.745 - 33.978: 99.8048% ( 2) 00:10:26.825 34.211 - 34.444: 99.8170% ( 1) 00:10:26.825 34.444 - 34.676: 99.8292% ( 1) 00:10:26.825 34.909 - 35.142: 99.8414% ( 1) 00:10:26.825 37.469 - 37.702: 99.8536% ( 1) 00:10:26.825 38.865 - 39.098: 99.8658% ( 1) 00:10:26.825 40.029 - 40.262: 99.8902% ( 2) 00:10:26.825 41.425 - 41.658: 99.9024% ( 1) 00:10:26.825 41.891 - 42.124: 99.9146% ( 1) 00:10:26.825 42.822 - 43.055: 99.9268% ( 1) 00:10:26.825 45.149 - 45.382: 99.9390% ( 1) 00:10:26.825 45.847 - 46.080: 99.9512% ( 1) 00:10:26.825 46.778 - 47.011: 99.9634% ( 1) 00:10:26.825 47.709 - 47.942: 99.9878% ( 2) 00:10:26.825 50.735 - 50.967: 100.0000% ( 1) 00:10:26.825 00:10:26.825 00:10:26.825 real 0m1.297s 00:10:26.825 user 0m1.109s 00:10:26.825 sys 0m0.141s 00:10:26.825 21:01:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.825 21:01:40 -- common/autotest_common.sh@10 -- # set +x 00:10:26.825 ************************************ 00:10:26.825 END TEST nvme_overhead 00:10:26.825 ************************************ 00:10:26.825 21:01:40 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:26.825 21:01:40 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:26.825 21:01:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.825 21:01:40 -- common/autotest_common.sh@10 -- # set +x 00:10:26.825 ************************************ 00:10:26.825 START TEST nvme_arbitration 00:10:26.825 ************************************ 00:10:26.825 21:01:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:30.124 Initializing NVMe Controllers 00:10:30.124 Attached to 0000:00:06.0 00:10:30.124 Attached to 0000:00:07.0 00:10:30.124 Attached to 0000:00:09.0 00:10:30.124 Attached to 0000:00:08.0 00:10:30.124 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:30.124 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:30.124 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:30.124 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:30.124 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:30.124 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:30.124 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:30.124 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:30.124 Initialization complete. Launching workers. 
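The arbitration example above echoes its full effective configuration (-q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0) even though the wrapper passed only -t 3 -i 0; the remaining knobs are presumably the tool's defaults printed back. With core mask 0xf it starts one worker per core, and each worker below reports servicing an urgent priority queue. A by-hand sketch under the same assumptions:

  # sketch: run the arbitration example directly; -t is the run time in
  # seconds and -i the DPDK shared-memory ID (other knobs left at defaults)
  cd /home/vagrant/spdk_repo/spdk
  sudo ./build/examples/arbitration -t 3 -i 0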
00:10:30.124 Starting thread on core 1 with urgent priority queue 00:10:30.124 Starting thread on core 2 with urgent priority queue 00:10:30.124 Starting thread on core 3 with urgent priority queue 00:10:30.124 Starting thread on core 0 with urgent priority queue 00:10:30.124 QEMU NVMe Ctrl (12340 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:10:30.124 QEMU NVMe Ctrl (12342 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:10:30.124 QEMU NVMe Ctrl (12341 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:10:30.124 QEMU NVMe Ctrl (12342 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:10:30.124 QEMU NVMe Ctrl (12343 ) core 2: 618.67 IO/s 161.64 secs/100000 ios 00:10:30.124 QEMU NVMe Ctrl (12342 ) core 3: 789.33 IO/s 126.69 secs/100000 ios 00:10:30.124 ======================================================== 00:10:30.125 00:10:30.125 ************************************ 00:10:30.125 END TEST nvme_arbitration 00:10:30.125 ************************************ 00:10:30.125 00:10:30.125 real 0m3.527s 00:10:30.125 user 0m9.766s 00:10:30.125 sys 0m0.137s 00:10:30.125 21:01:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.125 21:01:43 -- common/autotest_common.sh@10 -- # set +x 00:10:30.125 21:01:44 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:10:30.125 21:01:44 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:30.125 21:01:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.125 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:10:30.125 ************************************ 00:10:30.125 START TEST nvme_single_aen 00:10:30.125 ************************************ 00:10:30.125 21:01:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:10:30.383 [2024-07-13 21:01:44.086555] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:30.383 [2024-07-13 21:01:44.086858] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.383 [2024-07-13 21:01:44.268783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:30.383 [2024-07-13 21:01:44.270558] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:30.383 [2024-07-13 21:01:44.272006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:30.383 [2024-07-13 21:01:44.273317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:30.383 Asynchronous Event Request test 00:10:30.383 Attached to 0000:00:06.0 00:10:30.383 Attached to 0000:00:07.0 00:10:30.383 Attached to 0000:00:09.0 00:10:30.383 Attached to 0000:00:08.0 00:10:30.383 Reset controller to setup AER completions for this process 00:10:30.383 Registering asynchronous event callbacks... 
00:10:30.383 Getting orig temperature thresholds of all controllers 00:10:30.383 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.383 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.383 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.383 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.383 Setting all controllers temperature threshold low to trigger AER 00:10:30.383 Waiting for all controllers temperature threshold to be set lower 00:10:30.383 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.383 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:30.383 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.383 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:30.383 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.383 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:30.383 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.383 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:30.383 Waiting for all controllers to trigger AER and reset threshold 00:10:30.383 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.383 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.383 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.383 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.383 Cleaning up... 00:10:30.642 00:10:30.642 real 0m0.275s 00:10:30.642 user 0m0.104s 00:10:30.642 sys 0m0.128s 00:10:30.642 21:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.642 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:10:30.642 ************************************ 00:10:30.642 END TEST nvme_single_aen 00:10:30.642 ************************************ 00:10:30.642 21:01:44 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:30.642 21:01:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:30.642 21:01:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.642 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:10:30.642 ************************************ 00:10:30.642 START TEST nvme_doorbell_aers 00:10:30.642 ************************************ 00:10:30.642 21:01:44 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:10:30.642 21:01:44 -- nvme/nvme.sh@70 -- # bdfs=() 00:10:30.642 21:01:44 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:30.642 21:01:44 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:30.642 21:01:44 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:30.642 21:01:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:30.642 21:01:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:30.642 21:01:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:30.642 21:01:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:30.642 21:01:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:30.642 21:01:44 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:30.642 21:01:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:30.642 21:01:44 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:30.642 21:01:44 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:30.900 [2024-07-13 21:01:44.651321] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:40.891 Executing: test_write_invalid_db 00:10:40.891 Waiting for AER completion... 00:10:40.891 Failure: test_write_invalid_db 00:10:40.891 00:10:40.891 Executing: test_invalid_db_write_overflow_sq 00:10:40.891 Waiting for AER completion... 00:10:40.891 Failure: test_invalid_db_write_overflow_sq 00:10:40.891 00:10:40.891 Executing: test_invalid_db_write_overflow_cq 00:10:40.891 Waiting for AER completion... 00:10:40.891 Failure: test_invalid_db_write_overflow_cq 00:10:40.891 00:10:40.891 21:01:54 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:40.891 21:01:54 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:40.891 [2024-07-13 21:01:54.734444] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:50.861 Executing: test_write_invalid_db 00:10:50.862 Waiting for AER completion... 00:10:50.862 Failure: test_write_invalid_db 00:10:50.862 00:10:50.862 Executing: test_invalid_db_write_overflow_sq 00:10:50.862 Waiting for AER completion... 00:10:50.862 Failure: test_invalid_db_write_overflow_sq 00:10:50.862 00:10:50.862 Executing: test_invalid_db_write_overflow_cq 00:10:50.862 Waiting for AER completion... 00:10:50.862 Failure: test_invalid_db_write_overflow_cq 00:10:50.862 00:10:50.862 21:02:04 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:50.862 21:02:04 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:51.120 [2024-07-13 21:02:04.796958] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:01.108 Executing: test_write_invalid_db 00:11:01.108 Waiting for AER completion... 00:11:01.108 Failure: test_write_invalid_db 00:11:01.108 00:11:01.108 Executing: test_invalid_db_write_overflow_sq 00:11:01.108 Waiting for AER completion... 00:11:01.108 Failure: test_invalid_db_write_overflow_sq 00:11:01.108 00:11:01.108 Executing: test_invalid_db_write_overflow_cq 00:11:01.108 Waiting for AER completion... 00:11:01.108 Failure: test_invalid_db_write_overflow_cq 00:11:01.108 00:11:01.108 21:02:14 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:01.108 21:02:14 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:01.108 [2024-07-13 21:02:14.846546] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 Executing: test_write_invalid_db 00:11:11.079 Waiting for AER completion... 00:11:11.079 Failure: test_write_invalid_db 00:11:11.079 00:11:11.079 Executing: test_invalid_db_write_overflow_sq 00:11:11.079 Waiting for AER completion... 00:11:11.079 Failure: test_invalid_db_write_overflow_sq 00:11:11.079 00:11:11.079 Executing: test_invalid_db_write_overflow_cq 00:11:11.079 Waiting for AER completion... 
00:11:11.079 Failure: test_invalid_db_write_overflow_cq 00:11:11.079 00:11:11.079 00:11:11.079 real 0m40.246s 00:11:11.079 user 0m34.114s 00:11:11.079 sys 0m5.782s 00:11:11.079 21:02:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.079 ************************************ 00:11:11.079 21:02:24 -- common/autotest_common.sh@10 -- # set +x 00:11:11.079 END TEST nvme_doorbell_aers 00:11:11.079 ************************************ 00:11:11.079 21:02:24 -- nvme/nvme.sh@97 -- # uname 00:11:11.079 21:02:24 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:11.079 21:02:24 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:11:11.079 21:02:24 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:11.079 21:02:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:11.079 21:02:24 -- common/autotest_common.sh@10 -- # set +x 00:11:11.079 ************************************ 00:11:11.079 START TEST nvme_multi_aen 00:11:11.079 ************************************ 00:11:11.079 21:02:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:11:11.079 [2024-07-13 21:02:24.726161] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:11.079 [2024-07-13 21:02:24.726501] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.079 [2024-07-13 21:02:24.914692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:11.079 [2024-07-13 21:02:24.915013] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 [2024-07-13 21:02:24.915184] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 [2024-07-13 21:02:24.915216] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 [2024-07-13 21:02:24.917335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:11:11.079 [2024-07-13 21:02:24.917496] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 [2024-07-13 21:02:24.917706] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 [2024-07-13 21:02:24.917857] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 [2024-07-13 21:02:24.919333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:11:11.079 [2024-07-13 21:02:24.919509] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 [2024-07-13 21:02:24.919554] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 
00:11:11.079 [2024-07-13 21:02:24.919577] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 [2024-07-13 21:02:24.920909] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:11:11.079 [2024-07-13 21:02:24.921064] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 [2024-07-13 21:02:24.921194] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 [2024-07-13 21:02:24.921225] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:11:11.079 Child process pid: 64933 00:11:11.079 [2024-07-13 21:02:24.931356] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:11.079 [2024-07-13 21:02:24.931570] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.338 [Child] Asynchronous Event Request test 00:11:11.338 [Child] Attached to 0000:00:06.0 00:11:11.338 [Child] Attached to 0000:00:07.0 00:11:11.338 [Child] Attached to 0000:00:09.0 00:11:11.338 [Child] Attached to 0000:00:08.0 00:11:11.338 [Child] Registering asynchronous event callbacks... 00:11:11.338 [Child] Getting orig temperature thresholds of all controllers 00:11:11.338 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.338 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.338 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.338 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.338 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:11.338 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.338 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.338 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.338 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.338 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.338 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.338 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.338 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.338 [Child] Cleaning up... 00:11:11.338 Asynchronous Event Request test 00:11:11.338 Attached to 0000:00:06.0 00:11:11.338 Attached to 0000:00:07.0 00:11:11.338 Attached to 0000:00:09.0 00:11:11.339 Attached to 0000:00:08.0 00:11:11.339 Reset controller to setup AER completions for this process 00:11:11.339 Registering asynchronous event callbacks... 
00:11:11.339 Getting orig temperature thresholds of all controllers 00:11:11.339 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.339 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.339 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.339 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.339 Setting all controllers temperature threshold low to trigger AER 00:11:11.339 Waiting for all controllers temperature threshold to be set lower 00:11:11.339 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.339 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:11:11.339 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.339 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:11:11.339 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.339 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:11:11.339 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.339 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:11:11.339 Waiting for all controllers to trigger AER and reset threshold 00:11:11.339 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.339 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.339 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.339 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.339 Cleaning up... 00:11:11.339 00:11:11.339 real 0m0.564s 00:11:11.339 user 0m0.210s 00:11:11.339 sys 0m0.249s 00:11:11.339 21:02:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.339 21:02:25 -- common/autotest_common.sh@10 -- # set +x 00:11:11.339 ************************************ 00:11:11.339 END TEST nvme_multi_aen 00:11:11.339 ************************************ 00:11:11.597 21:02:25 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:11.597 21:02:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:11.597 21:02:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:11.597 21:02:25 -- common/autotest_common.sh@10 -- # set +x 00:11:11.597 ************************************ 00:11:11.597 START TEST nvme_startup 00:11:11.597 ************************************ 00:11:11.598 21:02:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:11.858 Initializing NVMe Controllers 00:11:11.858 Attached to 0000:00:06.0 00:11:11.858 Attached to 0000:00:07.0 00:11:11.858 Attached to 0000:00:09.0 00:11:11.858 Attached to 0000:00:08.0 00:11:11.858 Initialization complete. 00:11:11.858 Time used:194763.938 (us). 
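nvme_startup simply attaches to all four controllers and reports raw initialization time; the 194763.938 us above is about 0.19 s. The wrapper's -t 1000000 reads like an allowed startup budget in microseconds (an inference from the value, not something the log states), which this run clears with plenty of margin. A by-hand sketch:

  # sketch: time controller bring-up; the meaning of -t (assumed here to be
  # a microsecond budget) mirrors the wrapper's invocation above
  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvme/startup/startup -t 1000000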
00:11:11.858 00:11:11.858 real 0m0.278s 00:11:11.858 user 0m0.094s 00:11:11.858 sys 0m0.141s 00:11:11.858 ************************************ 00:11:11.858 END TEST nvme_startup 00:11:11.858 ************************************ 00:11:11.858 21:02:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.858 21:02:25 -- common/autotest_common.sh@10 -- # set +x 00:11:11.858 21:02:25 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:11.858 21:02:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:11.858 21:02:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:11.858 21:02:25 -- common/autotest_common.sh@10 -- # set +x 00:11:11.858 ************************************ 00:11:11.858 START TEST nvme_multi_secondary 00:11:11.858 ************************************ 00:11:11.858 21:02:25 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:11:11.858 21:02:25 -- nvme/nvme.sh@52 -- # pid0=64984 00:11:11.858 21:02:25 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:11.858 21:02:25 -- nvme/nvme.sh@54 -- # pid1=64985 00:11:11.858 21:02:25 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:11.858 21:02:25 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:16.045 Initializing NVMe Controllers 00:11:16.045 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:16.045 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:16.045 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:16.045 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:16.045 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:11:16.045 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:11:16.045 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:11:16.045 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:11:16.045 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:11:16.045 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:11:16.045 Initialization complete. Launching workers. 
00:11:16.045 ======================================================== 00:11:16.045 Latency(us) 00:11:16.045 Device Information : IOPS MiB/s Average min max 00:11:16.045 PCIE (0000:00:06.0) NSID 1 from core 2: 2383.67 9.31 6709.65 2120.95 13588.19 00:11:16.045 PCIE (0000:00:07.0) NSID 1 from core 2: 2383.67 9.31 6713.81 2145.47 13051.01 00:11:16.045 PCIE (0000:00:09.0) NSID 1 from core 2: 2383.67 9.31 6713.00 2018.72 13861.11 00:11:16.045 PCIE (0000:00:08.0) NSID 1 from core 2: 2383.67 9.31 6714.63 2038.12 13341.08 00:11:16.045 PCIE (0000:00:08.0) NSID 2 from core 2: 2383.67 9.31 6715.26 1716.00 12939.63 00:11:16.045 PCIE (0000:00:08.0) NSID 3 from core 2: 2383.67 9.31 6717.80 2086.98 12916.22 00:11:16.045 ======================================================== 00:11:16.045 Total : 14302.02 55.87 6714.02 1716.00 13861.11 00:11:16.045 00:11:16.045 21:02:29 -- nvme/nvme.sh@56 -- # wait 64984 00:11:16.045 Initializing NVMe Controllers 00:11:16.045 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:16.045 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:16.045 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:16.045 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:16.045 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:11:16.045 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:11:16.045 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:11:16.045 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:11:16.045 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:11:16.045 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:11:16.045 Initialization complete. Launching workers. 00:11:16.045 ======================================================== 00:11:16.045 Latency(us) 00:11:16.045 Device Information : IOPS MiB/s Average min max 00:11:16.045 PCIE (0000:00:06.0) NSID 1 from core 1: 5137.32 20.07 3123.51 1521.71 14929.73 00:11:16.045 PCIE (0000:00:07.0) NSID 1 from core 1: 5137.32 20.07 3125.48 1593.16 14756.75 00:11:16.045 PCIE (0000:00:09.0) NSID 1 from core 1: 5137.32 20.07 3125.90 1478.99 14686.91 00:11:16.045 PCIE (0000:00:08.0) NSID 1 from core 1: 5137.32 20.07 3125.86 1561.78 14149.89 00:11:16.045 PCIE (0000:00:08.0) NSID 2 from core 1: 5137.32 20.07 3125.93 1670.86 13902.60 00:11:16.045 PCIE (0000:00:08.0) NSID 3 from core 1: 5137.32 20.07 3126.05 1530.15 13758.11 00:11:16.045 ======================================================== 00:11:16.045 Total : 30823.93 120.41 3125.46 1478.99 14929.73 00:11:16.045 00:11:17.421 Initializing NVMe Controllers 00:11:17.421 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:17.421 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:17.421 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:17.421 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:17.421 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:17.421 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:17.421 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:17.421 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:17.421 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:17.421 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:17.421 Initialization complete. Launching workers. 
00:11:17.421 ======================================================== 00:11:17.421 Latency(us) 00:11:17.421 Device Information : IOPS MiB/s Average min max 00:11:17.421 PCIE (0000:00:06.0) NSID 1 from core 0: 7755.46 30.29 2061.53 1026.48 6368.66 00:11:17.421 PCIE (0000:00:07.0) NSID 1 from core 0: 7755.46 30.29 2062.61 1054.02 5928.92 00:11:17.421 PCIE (0000:00:09.0) NSID 1 from core 0: 7755.46 30.29 2062.59 1013.57 6236.08 00:11:17.421 PCIE (0000:00:08.0) NSID 1 from core 0: 7755.46 30.29 2062.57 990.75 6132.03 00:11:17.421 PCIE (0000:00:08.0) NSID 2 from core 0: 7755.46 30.29 2062.53 914.55 6139.49 00:11:17.421 PCIE (0000:00:08.0) NSID 3 from core 0: 7755.46 30.29 2062.50 787.36 6156.23 00:11:17.421 ======================================================== 00:11:17.421 Total : 46532.79 181.77 2062.39 787.36 6368.66 00:11:17.421 00:11:17.421 21:02:30 -- nvme/nvme.sh@57 -- # wait 64985 00:11:17.421 21:02:30 -- nvme/nvme.sh@61 -- # pid0=65055 00:11:17.421 21:02:30 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:17.421 21:02:30 -- nvme/nvme.sh@63 -- # pid1=65056 00:11:17.421 21:02:30 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:17.421 21:02:30 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:20.728 Initializing NVMe Controllers 00:11:20.728 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:20.728 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:20.728 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:20.728 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:20.728 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:20.728 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:20.728 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:20.728 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:20.728 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:20.728 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:20.728 Initialization complete. Launching workers. 
00:11:20.728 ======================================================== 00:11:20.728 Latency(us) 00:11:20.728 Device Information : IOPS MiB/s Average min max 00:11:20.728 PCIE (0000:00:06.0) NSID 1 from core 0: 5221.74 20.40 3062.35 990.62 6604.49 00:11:20.728 PCIE (0000:00:07.0) NSID 1 from core 0: 5221.74 20.40 3063.55 1037.45 6222.85 00:11:20.728 PCIE (0000:00:09.0) NSID 1 from core 0: 5221.74 20.40 3063.65 1019.68 6703.54 00:11:20.728 PCIE (0000:00:08.0) NSID 1 from core 0: 5221.74 20.40 3063.64 1032.53 6789.60 00:11:20.728 PCIE (0000:00:08.0) NSID 2 from core 0: 5221.74 20.40 3063.67 1045.50 6841.46 00:11:20.728 PCIE (0000:00:08.0) NSID 3 from core 0: 5221.74 20.40 3063.91 1026.15 6559.34 00:11:20.728 ======================================================== 00:11:20.728 Total : 31330.46 122.38 3063.46 990.62 6841.46 00:11:20.728 00:11:20.728 Initializing NVMe Controllers 00:11:20.728 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:20.728 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:20.728 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:20.728 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:20.728 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:11:20.728 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:11:20.728 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:11:20.728 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:11:20.728 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:11:20.728 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:11:20.728 Initialization complete. Launching workers. 00:11:20.728 ======================================================== 00:11:20.728 Latency(us) 00:11:20.728 Device Information : IOPS MiB/s Average min max 00:11:20.728 PCIE (0000:00:06.0) NSID 1 from core 1: 5118.97 20.00 3123.72 1057.68 7678.16 00:11:20.728 PCIE (0000:00:07.0) NSID 1 from core 1: 5118.97 20.00 3124.93 1087.12 7601.54 00:11:20.728 PCIE (0000:00:09.0) NSID 1 from core 1: 5118.97 20.00 3124.78 1035.23 7219.88 00:11:20.728 PCIE (0000:00:08.0) NSID 1 from core 1: 5118.97 20.00 3124.63 953.01 7191.47 00:11:20.728 PCIE (0000:00:08.0) NSID 2 from core 1: 5118.97 20.00 3124.49 941.60 7752.23 00:11:20.728 PCIE (0000:00:08.0) NSID 3 from core 1: 5118.97 20.00 3124.35 920.42 7588.98 00:11:20.728 ======================================================== 00:11:20.728 Total : 30713.80 119.98 3124.48 920.42 7752.23 00:11:20.728 00:11:22.631 Initializing NVMe Controllers 00:11:22.631 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:22.631 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:22.631 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:22.631 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:22.631 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:11:22.631 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:11:22.631 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:11:22.631 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:11:22.631 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:11:22.631 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:11:22.631 Initialization complete. Launching workers. 
00:11:22.631 ======================================================== 00:11:22.631 Latency(us) 00:11:22.631 Device Information : IOPS MiB/s Average min max 00:11:22.631 PCIE (0000:00:06.0) NSID 1 from core 2: 3503.94 13.69 4564.65 982.16 16165.04 00:11:22.631 PCIE (0000:00:07.0) NSID 1 from core 2: 3503.94 13.69 4565.82 1035.28 16789.72 00:11:22.631 PCIE (0000:00:09.0) NSID 1 from core 2: 3503.94 13.69 4565.58 1034.78 17275.36 00:11:22.631 PCIE (0000:00:08.0) NSID 1 from core 2: 3503.94 13.69 4565.48 981.51 14225.68 00:11:22.631 PCIE (0000:00:08.0) NSID 2 from core 2: 3503.94 13.69 4565.24 906.61 16659.92 00:11:22.631 PCIE (0000:00:08.0) NSID 3 from core 2: 3503.94 13.69 4565.33 783.95 16222.74 00:11:22.631 ======================================================== 00:11:22.631 Total : 21023.63 82.12 4565.35 783.95 17275.36 00:11:22.631 00:11:22.631 21:02:36 -- nvme/nvme.sh@65 -- # wait 65055 00:11:22.631 21:02:36 -- nvme/nvme.sh@66 -- # wait 65056 00:11:22.632 00:11:22.632 real 0m10.827s 00:11:22.632 user 0m19.066s 00:11:22.632 sys 0m0.841s 00:11:22.632 21:02:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.632 21:02:36 -- common/autotest_common.sh@10 -- # set +x 00:11:22.632 ************************************ 00:11:22.632 END TEST nvme_multi_secondary 00:11:22.632 ************************************ 00:11:22.632 21:02:36 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:22.632 21:02:36 -- nvme/nvme.sh@102 -- # kill_stub 00:11:22.632 21:02:36 -- common/autotest_common.sh@1065 -- # [[ -e /proc/63977 ]] 00:11:22.632 21:02:36 -- common/autotest_common.sh@1066 -- # kill 63977 00:11:22.632 21:02:36 -- common/autotest_common.sh@1067 -- # wait 63977 00:11:23.568 [2024-07-13 21:02:37.246930] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:23.568 [2024-07-13 21:02:37.247015] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:23.568 [2024-07-13 21:02:37.247041] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:23.568 [2024-07-13 21:02:37.247079] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:24.136 [2024-07-13 21:02:37.766382] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:24.136 [2024-07-13 21:02:37.766465] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:24.136 [2024-07-13 21:02:37.766491] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:24.136 [2024-07-13 21:02:37.766513] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:24.395 [2024-07-13 21:02:38.282549] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 
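Backing up to nvme_multi_secondary above: it exercises SPDK's multi-process mode by running several spdk_nvme_perf instances that share DPDK state through the common shared-memory ID (-i 0) while pinning to disjoint core masks (-c 0x1, -c 0x2, -c 0x4), then waiting on each PID. A minimal two-process sketch using the exact commands from this log (the sleep is an addition, not in the original script, to let the first instance finish initializing shared memory):

  # sketch: two perf instances sharing one shm ID on disjoint cores
  cd /home/vagrant/spdk_repo/spdk
  sudo ./build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
  pid0=$!
  sleep 1   # give the first (primary) process a head start
  sudo ./build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
  pid1=$!
  wait "$pid1"; wait "$pid0"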
00:11:24.395 [2024-07-13 21:02:38.282634] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:24.395 [2024-07-13 21:02:38.282661] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:24.395 [2024-07-13 21:02:38.282685] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:25.772 [2024-07-13 21:02:39.278600] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:25.772 [2024-07-13 21:02:39.278684] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:25.772 [2024-07-13 21:02:39.278711] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:25.772 [2024-07-13 21:02:39.278736] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64931) is not found. Dropping the request. 00:11:25.772 21:02:39 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:11:25.772 21:02:39 -- common/autotest_common.sh@1073 -- # echo 2 00:11:25.772 21:02:39 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:25.772 21:02:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:25.772 21:02:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:25.772 21:02:39 -- common/autotest_common.sh@10 -- # set +x 00:11:25.772 ************************************ 00:11:25.772 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:25.772 ************************************ 00:11:25.772 21:02:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:25.772 * Looking for test storage... 
00:11:25.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:25.772 21:02:39 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:25.772 21:02:39 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:25.772 21:02:39 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:25.772 21:02:39 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:25.772 21:02:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:25.772 21:02:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:25.772 21:02:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:25.772 21:02:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:25.772 21:02:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:25.772 21:02:39 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:25.772 21:02:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:25.772 21:02:39 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:11:25.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65240 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:25.772 21:02:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65240 00:11:25.772 21:02:39 -- common/autotest_common.sh@819 -- # '[' -z 65240 ']' 00:11:25.772 21:02:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.772 21:02:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:25.772 21:02:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.772 21:02:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:25.772 21:02:39 -- common/autotest_common.sh@10 -- # set +x 00:11:26.031 [2024-07-13 21:02:39.779556] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:26.031 [2024-07-13 21:02:39.779918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65240 ] 00:11:26.289 [2024-07-13 21:02:39.956458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.289 [2024-07-13 21:02:40.188532] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:26.289 [2024-07-13 21:02:40.189192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.289 [2024-07-13 21:02:40.189343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.289 [2024-07-13 21:02:40.189416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.289 [2024-07-13 21:02:40.189433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.662 21:02:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:27.662 21:02:41 -- common/autotest_common.sh@852 -- # return 0 00:11:27.662 21:02:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:11:27.662 21:02:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.662 21:02:41 -- common/autotest_common.sh@10 -- # set +x 00:11:27.662 nvme0n1 00:11:27.662 21:02:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.662 21:02:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:27.662 21:02:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Gz1sf.txt 00:11:27.662 21:02:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:27.662 21:02:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.662 21:02:41 -- common/autotest_common.sh@10 -- # set +x 00:11:27.662 true 00:11:27.662 21:02:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.662 21:02:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:27.662 21:02:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720904561 00:11:27.662 21:02:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65276 00:11:27.662 21:02:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:27.662 21:02:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:27.662 21:02:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:30.189 21:02:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.189 21:02:43 -- common/autotest_common.sh@10 -- # set +x 00:11:30.189 [2024-07-13 21:02:43.570380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:30.189 [2024-07-13 21:02:43.570788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:30.189 [2024-07-13 21:02:43.570827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:30.189 [2024-07-13 21:02:43.570863] nvme_qpair.c: 
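The stuck-admin-command scenario assembled above comes down to four RPC calls; a condensed replay under the same parameters (the backgrounding/wait structure is a simplification of the script's get_feat_pid handling):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
  # Hold the next GET FEATURES admin command (opc 10 = 0x0a) for up to 15 s,
  # then complete it with sct=0 sc=1 instead of submitting it to the drive.
  $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # The SQE below is the one from this run: byte 0 = 0x0a (GET FEATURES),
  # cdw10 = 0x7 (Number of Queues); the call blocks on the held command.
  $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
      -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
  sleep 2
  $rpc bdev_nvme_reset_controller nvme0   # reset completes the held command manually
  wait $!                                 # send_cmd returns with the injected status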
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.189 [2024-07-13 21:02:43.572793] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:30.189 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65276 00:11:30.189 21:02:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65276 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65276 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:30.189 21:02:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.189 21:02:43 -- common/autotest_common.sh@10 -- # set +x 00:11:30.189 21:02:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Gz1sf.txt 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Gz1sf.txt 00:11:30.189 21:02:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65240 00:11:30.189 21:02:43 -- common/autotest_common.sh@926 -- # '[' -z 65240 ']' 00:11:30.189 21:02:43 -- common/autotest_common.sh@930 -- # kill -0 65240 00:11:30.189 21:02:43 -- common/autotest_common.sh@931 -- # uname 00:11:30.189 21:02:43 -- 
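The two base64_decode_bits calls above extract SC and SCT from the 16-byte completion captured in /tmp/err_inj_Gz1sf.txt; the arithmetic, sketched standalone (status word is CQE bytes 14-15, little-endian; the helper's exact signature lives in common/autotest_common.sh):

  cpl=AAAAAAAAAAAAAAAAAAACAA==             # decodes to 16 bytes, 0x02 at offset 14
  bytes=($(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'))
  status=$(( bytes[14] | bytes[15] << 8 )) # -> 0x0002
  sc=$((  (status >> 1) & 0xff ))          # Status Code      -> 0x1, matches --sc 1
  sct=$(( (status >> 9) & 0x7  ))          # Status Code Type -> 0x0, matches --sct 0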
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:30.189 21:02:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65240 00:11:30.189 killing process with pid 65240 00:11:30.189 21:02:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:30.190 21:02:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:30.190 21:02:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65240' 00:11:30.190 21:02:43 -- common/autotest_common.sh@945 -- # kill 65240 00:11:30.190 21:02:43 -- common/autotest_common.sh@950 -- # wait 65240 00:11:32.090 21:02:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:32.090 21:02:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:32.090 00:11:32.090 real 0m6.072s 00:11:32.090 user 0m21.660s 00:11:32.090 sys 0m0.610s 00:11:32.090 21:02:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.090 ************************************ 00:11:32.090 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:32.090 ************************************ 00:11:32.090 21:02:45 -- common/autotest_common.sh@10 -- # set +x 00:11:32.090 21:02:45 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:32.090 21:02:45 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:32.090 21:02:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:32.090 21:02:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.090 21:02:45 -- common/autotest_common.sh@10 -- # set +x 00:11:32.090 ************************************ 00:11:32.090 START TEST nvme_fio 00:11:32.090 ************************************ 00:11:32.090 21:02:45 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:11:32.090 21:02:45 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:32.090 21:02:45 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:32.090 21:02:45 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:32.090 21:02:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:32.090 21:02:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:32.090 21:02:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:32.090 21:02:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:32.090 21:02:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:32.090 21:02:45 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:32.090 21:02:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:32.090 21:02:45 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:11:32.090 21:02:45 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:32.090 21:02:45 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:32.090 21:02:45 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:32.090 21:02:45 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:32.090 21:02:46 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:32.090 21:02:46 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:32.656 21:02:46 -- nvme/nvme.sh@41 -- # bs=4096 00:11:32.656 21:02:46 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:32.656 21:02:46 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:32.656 21:02:46 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:32.656 21:02:46 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:32.656 21:02:46 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:32.656 21:02:46 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:32.656 21:02:46 -- common/autotest_common.sh@1320 -- # shift 00:11:32.656 21:02:46 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:32.656 21:02:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:32.656 21:02:46 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:32.656 21:02:46 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:32.656 21:02:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:32.656 21:02:46 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:32.656 21:02:46 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:32.656 21:02:46 -- common/autotest_common.sh@1326 -- # break 00:11:32.656 21:02:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:32.656 21:02:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:32.656 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:32.656 fio-3.35 00:11:32.656 Starting 1 thread 00:11:35.941 00:11:35.941 test: (groupid=0, jobs=1): err= 0: pid=65426: Sat Jul 13 21:02:49 2024 00:11:35.941 read: IOPS=15.3k, BW=59.8MiB/s (62.7MB/s)(120MiB/2001msec) 00:11:35.941 slat (nsec): min=3971, max=86059, avg=6194.53, stdev=3137.00 00:11:35.941 clat (usec): min=509, max=9868, avg=4152.78, stdev=696.28 00:11:35.941 lat (usec): min=515, max=9915, avg=4158.97, stdev=697.05 00:11:35.941 clat percentiles (usec): 00:11:35.941 | 1.00th=[ 2966], 5.00th=[ 3359], 10.00th=[ 3490], 20.00th=[ 3621], 00:11:35.941 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 4047], 60.00th=[ 4228], 00:11:35.941 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5080], 00:11:35.941 | 99.00th=[ 6587], 99.50th=[ 8094], 99.90th=[ 9503], 99.95th=[ 9634], 00:11:35.941 | 99.99th=[ 9765] 00:11:35.941 bw ( KiB/s): min=56112, max=63352, per=98.71%, avg=60477.33, stdev=3843.30, samples=3 00:11:35.941 iops : min=14028, max=15838, avg=15119.33, stdev=960.83, samples=3 00:11:35.941 write: IOPS=15.3k, BW=59.9MiB/s (62.8MB/s)(120MiB/2001msec); 0 zone resets 00:11:35.941 slat (usec): min=4, max=111, avg= 6.33, stdev= 3.14 00:11:35.941 clat (usec): min=328, max=9883, avg=4166.90, stdev=687.77 00:11:35.941 lat (usec): min=346, max=9889, avg=4173.23, stdev=688.52 00:11:35.941 clat percentiles (usec): 00:11:35.941 | 1.00th=[ 2999], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3654], 00:11:35.941 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4080], 60.00th=[ 4293], 00:11:35.941 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5145], 00:11:35.941 | 99.00th=[ 6718], 99.50th=[ 7767], 
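The ldd/grep/awk sequence above repeats once per controller; its core is small. A sketch with the paths from this run:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  # fio dlopen()s the SPDK ioengine; when the plugin links against ASAN, the
  # sanitizer runtime must be loaded before everything else, hence the pair
  # in LD_PRELOAD rather than the plugin alone.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096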
99.90th=[ 9503], 99.95th=[ 9634], 00:11:35.941 | 99.99th=[ 9765] 00:11:35.941 bw ( KiB/s): min=55416, max=62480, per=97.97%, avg=60109.33, stdev=4064.62, samples=3 00:11:35.941 iops : min=13854, max=15620, avg=15027.33, stdev=1016.15, samples=3 00:11:35.941 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:35.941 lat (msec) : 2=0.06%, 4=46.38%, 10=53.52% 00:11:35.941 cpu : usr=98.80%, sys=0.00%, ctx=4, majf=0, minf=607 00:11:35.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:35.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:35.941 issued rwts: total=30648,30693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:35.941 00:11:35.941 Run status group 0 (all jobs): 00:11:35.941 READ: bw=59.8MiB/s (62.7MB/s), 59.8MiB/s-59.8MiB/s (62.7MB/s-62.7MB/s), io=120MiB (126MB), run=2001-2001msec 00:11:35.941 WRITE: bw=59.9MiB/s (62.8MB/s), 59.9MiB/s-59.9MiB/s (62.8MB/s-62.8MB/s), io=120MiB (126MB), run=2001-2001msec 00:11:35.941 ----------------------------------------------------- 00:11:35.941 Suppressions used: 00:11:35.941 count bytes template 00:11:35.941 1 32 /usr/src/fio/parse.c 00:11:35.941 1 8 libtcmalloc_minimal.so 00:11:35.941 ----------------------------------------------------- 00:11:35.942 00:11:35.942 21:02:49 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:35.942 21:02:49 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:35.942 21:02:49 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:35.942 21:02:49 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:36.200 21:02:49 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:36.200 21:02:49 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:36.459 21:02:50 -- nvme/nvme.sh@41 -- # bs=4096 00:11:36.459 21:02:50 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:36.459 21:02:50 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:36.459 21:02:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:36.459 21:02:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:36.459 21:02:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:36.459 21:02:50 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:36.459 21:02:50 -- common/autotest_common.sh@1320 -- # shift 00:11:36.459 21:02:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:36.459 21:02:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:36.459 21:02:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:36.459 21:02:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:36.459 21:02:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:36.459 21:02:50 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:36.459 21:02:50 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:36.459 
21:02:50 -- common/autotest_common.sh@1326 -- # break 00:11:36.459 21:02:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:36.459 21:02:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:36.719 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:36.719 fio-3.35 00:11:36.719 Starting 1 thread 00:11:40.023 00:11:40.023 test: (groupid=0, jobs=1): err= 0: pid=65484: Sat Jul 13 21:02:53 2024 00:11:40.023 read: IOPS=16.4k, BW=64.1MiB/s (67.2MB/s)(128MiB/2001msec) 00:11:40.023 slat (usec): min=4, max=206, avg= 5.89, stdev= 2.48 00:11:40.023 clat (usec): min=247, max=10474, avg=3873.46, stdev=473.60 00:11:40.023 lat (usec): min=252, max=10535, avg=3879.35, stdev=474.27 00:11:40.023 clat percentiles (usec): 00:11:40.023 | 1.00th=[ 3195], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:11:40.023 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:11:40.023 | 70.00th=[ 3916], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4621], 00:11:40.023 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 8094], 99.95th=[ 8979], 00:11:40.023 | 99.99th=[10159] 00:11:40.023 bw ( KiB/s): min=61776, max=68472, per=98.92%, avg=64925.33, stdev=3365.64, samples=3 00:11:40.023 iops : min=15444, max=17118, avg=16231.33, stdev=841.41, samples=3 00:11:40.023 write: IOPS=16.4k, BW=64.2MiB/s (67.3MB/s)(128MiB/2001msec); 0 zone resets 00:11:40.023 slat (usec): min=4, max=182, avg= 6.06, stdev= 2.36 00:11:40.023 clat (usec): min=224, max=10284, avg=3890.57, stdev=485.14 00:11:40.023 lat (usec): min=229, max=10296, avg=3896.63, stdev=485.79 00:11:40.023 clat percentiles (usec): 00:11:40.023 | 1.00th=[ 3195], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3589], 00:11:40.023 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:11:40.023 | 70.00th=[ 3916], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4686], 00:11:40.023 | 99.00th=[ 4883], 99.50th=[ 5407], 99.90th=[ 8094], 99.95th=[ 8979], 00:11:40.023 | 99.99th=[10028] 00:11:40.023 bw ( KiB/s): min=62144, max=68088, per=98.38%, avg=64674.67, stdev=3068.73, samples=3 00:11:40.023 iops : min=15536, max=17022, avg=16168.67, stdev=767.18, samples=3 00:11:40.023 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:40.023 lat (msec) : 2=0.16%, 4=72.87%, 10=26.92%, 20=0.02% 00:11:40.023 cpu : usr=98.45%, sys=0.30%, ctx=26, majf=0, minf=607 00:11:40.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:40.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:40.023 issued rwts: total=32832,32886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:40.023 00:11:40.023 Run status group 0 (all jobs): 00:11:40.023 READ: bw=64.1MiB/s (67.2MB/s), 64.1MiB/s-64.1MiB/s (67.2MB/s-67.2MB/s), io=128MiB (134MB), run=2001-2001msec 00:11:40.023 WRITE: bw=64.2MiB/s (67.3MB/s), 64.2MiB/s-64.2MiB/s (67.3MB/s-67.3MB/s), io=128MiB (135MB), run=2001-2001msec 00:11:40.023 ----------------------------------------------------- 00:11:40.023 Suppressions used: 00:11:40.023 count bytes template 00:11:40.023 1 32 /usr/src/fio/parse.c 00:11:40.023 1 8 libtcmalloc_minimal.so 00:11:40.023 
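Note the dotted traddr in every --filename above: fio treats ':' inside a filename as a separator, so the SPDK plugin expects the PCI address with dots instead. The substitution, as a one-liner:

  bdf=0000:00:07.0
  filename="trtype=PCIe traddr=${bdf//:/.}"   # -> 'trtype=PCIe traddr=0000.00.07.0'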
----------------------------------------------------- 00:11:40.023 00:11:40.023 21:02:53 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:40.023 21:02:53 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:40.023 21:02:53 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:40.023 21:02:53 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:40.281 21:02:54 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:40.281 21:02:54 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:40.538 21:02:54 -- nvme/nvme.sh@41 -- # bs=4096 00:11:40.538 21:02:54 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:40.538 21:02:54 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:40.538 21:02:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:40.538 21:02:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:40.538 21:02:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:40.538 21:02:54 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:40.538 21:02:54 -- common/autotest_common.sh@1320 -- # shift 00:11:40.538 21:02:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:40.538 21:02:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:40.538 21:02:54 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:40.538 21:02:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:40.538 21:02:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:40.538 21:02:54 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:40.538 21:02:54 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:40.538 21:02:54 -- common/autotest_common.sh@1326 -- # break 00:11:40.538 21:02:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:40.538 21:02:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:40.795 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:40.796 fio-3.35 00:11:40.796 Starting 1 thread 00:11:44.080 00:11:44.080 test: (groupid=0, jobs=1): err= 0: pid=65556: Sat Jul 13 21:02:57 2024 00:11:44.080 read: IOPS=16.3k, BW=63.6MiB/s (66.6MB/s)(127MiB/2001msec) 00:11:44.080 slat (usec): min=4, max=597, avg= 6.00, stdev= 4.00 00:11:44.080 clat (usec): min=299, max=10176, avg=3906.17, stdev=564.77 00:11:44.080 lat (usec): min=305, max=10264, avg=3912.16, stdev=565.48 00:11:44.080 clat percentiles (usec): 00:11:44.080 | 1.00th=[ 3130], 5.00th=[ 3294], 10.00th=[ 3392], 20.00th=[ 3490], 00:11:44.080 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 4080], 00:11:44.080 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:11:44.080 | 99.00th=[ 6194], 99.50th=[ 7242], 99.90th=[ 7898], 99.95th=[ 8848], 00:11:44.080 | 99.99th=[10028] 00:11:44.080 bw ( 
KiB/s): min=62688, max=69152, per=100.00%, avg=65698.67, stdev=3254.66, samples=3 00:11:44.080 iops : min=15672, max=17288, avg=16424.67, stdev=813.66, samples=3 00:11:44.080 write: IOPS=16.3k, BW=63.7MiB/s (66.8MB/s)(127MiB/2001msec); 0 zone resets 00:11:44.080 slat (usec): min=4, max=283, avg= 6.12, stdev= 2.37 00:11:44.080 clat (usec): min=272, max=10103, avg=3921.29, stdev=584.54 00:11:44.080 lat (usec): min=278, max=10115, avg=3927.40, stdev=585.26 00:11:44.080 clat percentiles (usec): 00:11:44.080 | 1.00th=[ 3130], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3490], 00:11:44.080 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 4080], 00:11:44.080 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:11:44.080 | 99.00th=[ 6456], 99.50th=[ 7570], 99.90th=[ 7963], 99.95th=[ 8979], 00:11:44.080 | 99.99th=[ 9896] 00:11:44.080 bw ( KiB/s): min=63048, max=68952, per=100.00%, avg=65509.33, stdev=3071.90, samples=3 00:11:44.080 iops : min=15762, max=17238, avg=16377.33, stdev=767.97, samples=3 00:11:44.080 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:44.080 lat (msec) : 2=0.09%, 4=56.69%, 10=43.17%, 20=0.01% 00:11:44.080 cpu : usr=98.10%, sys=0.55%, ctx=16, majf=0, minf=608 00:11:44.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:44.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:44.080 issued rwts: total=32556,32635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:44.080 00:11:44.080 Run status group 0 (all jobs): 00:11:44.080 READ: bw=63.6MiB/s (66.6MB/s), 63.6MiB/s-63.6MiB/s (66.6MB/s-66.6MB/s), io=127MiB (133MB), run=2001-2001msec 00:11:44.080 WRITE: bw=63.7MiB/s (66.8MB/s), 63.7MiB/s-63.7MiB/s (66.8MB/s-66.8MB/s), io=127MiB (134MB), run=2001-2001msec 00:11:44.338 ----------------------------------------------------- 00:11:44.338 Suppressions used: 00:11:44.338 count bytes template 00:11:44.338 1 32 /usr/src/fio/parse.c 00:11:44.338 1 8 libtcmalloc_minimal.so 00:11:44.338 ----------------------------------------------------- 00:11:44.338 00:11:44.338 21:02:58 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:44.338 21:02:58 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:44.338 21:02:58 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:44.338 21:02:58 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:44.597 21:02:58 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:44.597 21:02:58 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:44.855 21:02:58 -- nvme/nvme.sh@41 -- # bs=4096 00:11:44.855 21:02:58 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:44.855 21:02:58 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:44.855 21:02:58 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:44.855 21:02:58 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:44.855 21:02:58 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:44.855 21:02:58 -- 
common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:44.855 21:02:58 -- common/autotest_common.sh@1320 -- # shift 00:11:44.855 21:02:58 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:44.855 21:02:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:44.855 21:02:58 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:44.855 21:02:58 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:44.855 21:02:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:44.855 21:02:58 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:44.855 21:02:58 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:44.855 21:02:58 -- common/autotest_common.sh@1326 -- # break 00:11:44.855 21:02:58 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:44.855 21:02:58 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:45.114 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:45.114 fio-3.35 00:11:45.114 Starting 1 thread 00:11:49.309 00:11:49.309 test: (groupid=0, jobs=1): err= 0: pid=65621: Sat Jul 13 21:03:03 2024 00:11:49.309 read: IOPS=16.1k, BW=62.8MiB/s (65.9MB/s)(126MiB/2001msec) 00:11:49.309 slat (nsec): min=4554, max=55222, avg=5959.27, stdev=1828.63 00:11:49.309 clat (usec): min=333, max=8624, avg=3955.06, stdev=470.25 00:11:49.309 lat (usec): min=339, max=8680, avg=3961.02, stdev=470.98 00:11:49.309 clat percentiles (usec): 00:11:49.309 | 1.00th=[ 3425], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:11:49.309 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3884], 00:11:49.309 | 70.00th=[ 3982], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4686], 00:11:49.309 | 99.00th=[ 5145], 99.50th=[ 6783], 99.90th=[ 7898], 99.95th=[ 8029], 00:11:49.309 | 99.99th=[ 8455] 00:11:49.309 bw ( KiB/s): min=61408, max=68888, per=99.59%, avg=64082.67, stdev=4170.42, samples=3 00:11:49.309 iops : min=15352, max=17222, avg=16020.67, stdev=1042.61, samples=3 00:11:49.309 write: IOPS=16.1k, BW=63.0MiB/s (66.0MB/s)(126MiB/2001msec); 0 zone resets 00:11:49.309 slat (nsec): min=4619, max=46453, avg=6096.04, stdev=1783.31 00:11:49.309 clat (usec): min=306, max=8477, avg=3963.40, stdev=468.20 00:11:49.309 lat (usec): min=313, max=8489, avg=3969.49, stdev=468.89 00:11:49.309 clat percentiles (usec): 00:11:49.309 | 1.00th=[ 3425], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:11:49.309 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3884], 00:11:49.309 | 70.00th=[ 3982], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4686], 00:11:49.309 | 99.00th=[ 5145], 99.50th=[ 6718], 99.90th=[ 7832], 99.95th=[ 8029], 00:11:49.309 | 99.99th=[ 8225] 00:11:49.309 bw ( KiB/s): min=61400, max=68288, per=98.97%, avg=63818.67, stdev=3874.93, samples=3 00:11:49.309 iops : min=15350, max=17072, avg=15954.67, stdev=968.73, samples=3 00:11:49.309 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:49.309 lat (msec) : 2=0.04%, 4=71.44%, 10=28.48% 00:11:49.309 cpu : usr=99.05%, sys=0.05%, ctx=2, majf=0, minf=605 00:11:49.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:49.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:11:49.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.309 issued rwts: total=32189,32257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.309 00:11:49.309 Run status group 0 (all jobs): 00:11:49.309 READ: bw=62.8MiB/s (65.9MB/s), 62.8MiB/s-62.8MiB/s (65.9MB/s-65.9MB/s), io=126MiB (132MB), run=2001-2001msec 00:11:49.309 WRITE: bw=63.0MiB/s (66.0MB/s), 63.0MiB/s-63.0MiB/s (66.0MB/s-66.0MB/s), io=126MiB (132MB), run=2001-2001msec 00:11:49.309 ----------------------------------------------------- 00:11:49.309 Suppressions used: 00:11:49.309 count bytes template 00:11:49.309 1 32 /usr/src/fio/parse.c 00:11:49.309 1 8 libtcmalloc_minimal.so 00:11:49.309 ----------------------------------------------------- 00:11:49.309 00:11:49.309 21:03:03 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:49.309 21:03:03 -- nvme/nvme.sh@46 -- # true 00:11:49.309 00:11:49.309 real 0m17.550s 00:11:49.309 user 0m13.825s 00:11:49.309 sys 0m2.980s 00:11:49.309 21:03:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.309 21:03:03 -- common/autotest_common.sh@10 -- # set +x 00:11:49.309 ************************************ 00:11:49.309 END TEST nvme_fio 00:11:49.309 ************************************ 00:11:49.567 00:11:49.567 real 1m34.442s 00:11:49.567 user 3m48.093s 00:11:49.567 sys 0m14.805s 00:11:49.567 21:03:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.567 21:03:03 -- common/autotest_common.sh@10 -- # set +x 00:11:49.567 ************************************ 00:11:49.567 END TEST nvme 00:11:49.567 ************************************ 00:11:49.567 21:03:03 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:49.568 21:03:03 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:49.568 21:03:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:49.568 21:03:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:49.568 21:03:03 -- common/autotest_common.sh@10 -- # set +x 00:11:49.568 ************************************ 00:11:49.568 START TEST nvme_scc 00:11:49.568 ************************************ 00:11:49.568 21:03:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:49.568 * Looking for test storage... 
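The asterisk banners, START/END lines, and real/user/sys triples framing each test come from the run_test wrapper; a minimal sketch of its shape (the real helper in common/autotest_common.sh also validates its arguments and toggles xtrace, as the '[' 2 -le 1 ']' and xtrace_disable traces show):

  run_test() {
      local test_name=$1; shift
      echo "START TEST $test_name"
      time "$@"                      # emits the real/user/sys lines seen above
      echo "END TEST $test_name"
  }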
00:11:49.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:49.568 21:03:03 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:49.568 21:03:03 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:49.568 21:03:03 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:49.568 21:03:03 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:49.568 21:03:03 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.568 21:03:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.568 21:03:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.568 21:03:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.568 21:03:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.568 21:03:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.568 21:03:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.568 21:03:03 -- paths/export.sh@5 -- # export PATH 00:11:49.568 21:03:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.568 21:03:03 -- nvme/functions.sh@10 -- # ctrls=() 00:11:49.568 21:03:03 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:49.568 21:03:03 -- nvme/functions.sh@11 -- # nvmes=() 00:11:49.568 21:03:03 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:49.568 21:03:03 -- nvme/functions.sh@12 -- # bdfs=() 00:11:49.568 21:03:03 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:49.568 21:03:03 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:49.568 21:03:03 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:49.568 21:03:03 -- nvme/functions.sh@14 -- # nvme_name= 00:11:49.568 21:03:03 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.568 21:03:03 -- nvme/nvme_scc.sh@12 -- # uname 00:11:49.568 21:03:03 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
00:11:49.568 21:03:03 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:49.568 21:03:03 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:50.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:50.133 Waiting for block devices as requested 00:11:50.133 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.133 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.391 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.391 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:55.660 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:55.661 21:03:09 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:55.661 21:03:09 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:55.661 21:03:09 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:55.661 21:03:09 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:55.661 21:03:09 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:55.661 21:03:09 -- scripts/common.sh@15 -- # local i 00:11:55.661 21:03:09 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:55.661 21:03:09 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:55.661 21:03:09 -- scripts/common.sh@24 -- # return 0 00:11:55.661 21:03:09 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:55.661 21:03:09 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:55.661 21:03:09 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@18 -- # shift 00:11:55.661 21:03:09 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 
00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 
-- # [[ -n 3 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.661 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:55.661 21:03:09 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:55.661 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 
21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.662 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.662 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:55.662 21:03:09 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:55.663 21:03:09 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:55.663 21:03:09 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:55.663 21:03:09 -- nvme/functions.sh@62 -- # 
bdfs["$ctrl_dev"]=0000:00:09.0 00:11:55.663 21:03:09 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:55.663 21:03:09 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:55.663 21:03:09 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:55.663 21:03:09 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:55.663 21:03:09 -- scripts/common.sh@15 -- # local i 00:11:55.663 21:03:09 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:55.663 21:03:09 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:55.663 21:03:09 -- scripts/common.sh@24 -- # return 0 00:11:55.663 21:03:09 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:55.663 21:03:09 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:55.663 21:03:09 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@18 -- # shift 00:11:55.663 21:03:09 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 
525400 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.663 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:55.663 21:03:09 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.663 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 
'nvme1[frmw]="0x3"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
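The surrounding trace is the xtrace of the nvme_get helper from test/nvme/functions.sh: it runs the nvme-cli subcommand against the device (functions.sh@16), splits each output line on ':' (@21), keeps only lines that actually carry a value (@22), and evals the pair into a global associative array named after the controller (@23). A minimal sketch of that pattern, reconstructed from the trace rather than copied from the repo (the trimming details are approximations):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"             # e.g. declare -gA nvme1=()  (@20 in the trace)
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue   # keep only "name : value" lines (@22)
            reg=${reg//[[:space:]]/}    # drop the padding around the key
            val=${val# }                # and the space after the colon
            eval "${ref}[\$reg]=\$val"  # nvme1[vid]=0x1b36, nvme1[sn]='12342 ', ...
        done < <(nvme "$@")             # trace runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
    }

Called as nvme_get nvme1 id-ctrl /dev/nvme1, this leaves every id-ctrl field addressable as ${nvme1[field]}, which is exactly the stream of assignments the log records here.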
00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.664 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.664 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:55.664 21:03:09 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 
21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- 
# IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- 
# nvme1[icsvscc]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.665 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:55.665 21:03:09 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.665 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:55.666 21:03:09 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:55.666 21:03:09 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:55.666 21:03:09 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:55.666 21:03:09 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@18 -- # shift 00:11:55.666 21:03:09 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 
21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:55.666 21:03:09 -- 
nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.666 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:55.666 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:55.666 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 
21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:55.667 
21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:55.667 21:03:09 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:55.667 21:03:09 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:55.667 21:03:09 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:55.667 21:03:09 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@18 -- # shift 00:11:55.667 21:03:09 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 
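At this point the per-namespace loop (functions.sh@54-@58) has filled nvme1n1 from nvme id-ns /dev/nvme1n1 and moved on to nvme1n2. The fields captured above are enough to recover the namespace geometry: flbas selects which lbafN entry is in use, and lbads in that entry is the log2 of the block size. A short sketch of the decode, assuming nvme1n1 was populated as in the trace:

    flbas=$((nvme1n1[flbas] & 0xf))    # low nibble picks the format; 0x4 -> lbaf4
    lbaf=${nvme1n1[lbaf$flbas]}        # "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf#*lbads:}              # "12 rp:0 (in use)"
    lbads=${lbads%% *}                 # "12"
    echo "$((1 << lbads)) byte blocks, $((nvme1n1[nsze])) LBAs"

With flbas=0x4 and lbaf4 reporting lbads:12, that is 4096-byte blocks across nsze=0x100000 LBAs, i.e. a 4 GiB namespace.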
00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.667 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:55.667 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:55.667 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 
00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 
21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.668 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:55.668 21:03:09 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.668 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.669 21:03:09 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:55.669 21:03:09 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:55.669 21:03:09 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:55.669 21:03:09 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:55.669 21:03:09 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 
00:11:55.669 21:03:09 /sys/class/nvme/nvme1/nvme1n3 exists; nvme_get nvme1n3: id-ns /dev/nvme1n3 parsed into nvme1n3[], all fields identical to nvme1n2 (nsze=ncap=nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mssrl=128 mcl=128 msrc=127, remaining fields 0, same lbaf0..lbaf7 with lbaf4 in use)
00:11:55.670 21:03:09 nvme/functions.sh@58: _ctrl_ns[3]=nvme1n3
00:11:55.670 21:03:09 nvme1 registered: ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:08.0 ordered_ctrls[1]=nvme1
nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 
21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.670 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.670 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.670 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- 
nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.671 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:55.671 21:03:09 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.671 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:55.672 
21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 
'nvme2[oncs]="0x15d"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.672 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:55.672 21:03:09 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.672 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:55.934 21:03:09 -- 
nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:55.934 21:03:09 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:55.934 21:03:09 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:55.934 21:03:09 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:55.934 21:03:09 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@18 -- # shift 00:11:55.934 21:03:09 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.934 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 
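[note] The functions.sh@53-@57 entries above show how namespaces are discovered before each id-ns dump: glob sysfs under the controller, skip non-matches, then run id-ns on each hit. A standalone sketch of that walk, assuming the /sys/class/nvme/nvmeX/nvmeXnY layout seen in this run:

    for ctrl in /sys/class/nvme/nvme*; do
        for ns in "$ctrl/${ctrl##*/}n"*; do         # e.g. /sys/class/nvme/nvme2/nvme2n1
            [[ -e $ns ]] || continue                # glob may not match; skip the literal pattern
            echo "${ctrl##*/} -> ${ns##*/}"
        done
    done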
00:11:55.934 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:55.934 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.935 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.935 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:55.935 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:55.936 21:03:09 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:55.936 21:03:09 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:55.936 21:03:09 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:55.936 21:03:09 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:55.936 21:03:09 -- nvme/functions.sh@47 -- # for ctrl 
in /sys/class/nvme/nvme* 00:11:55.936 21:03:09 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:55.936 21:03:09 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:55.936 21:03:09 -- scripts/common.sh@15 -- # local i 00:11:55.936 21:03:09 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:55.936 21:03:09 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:55.936 21:03:09 -- scripts/common.sh@24 -- # return 0 00:11:55.936 21:03:09 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:55.936 21:03:09 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:55.936 21:03:09 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@18 -- # shift 00:11:55.936 21:03:09 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:55.936 21:03:09 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:55.936 21:03:09 -- 
nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.936 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:55.936 21:03:09 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:55.936 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:55.937 21:03:09 
-- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:55.937 21:03:09 -- 
nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
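[note] The sqes/cqes values parsed just below encode queue entry sizes as powers of two: bits 3:0 give the required size, bits 7:4 the maximum (NVMe Identify Controller layout). Decoding this controller's 0x66/0x44:

    sqes=0x66 cqes=0x44
    echo "SQE min=$((2 ** (sqes & 0xf))) max=$((2 ** (sqes >> 4 & 0xf))) bytes"   # 64/64
    echo "CQE min=$((2 ** (cqes & 0xf))) max=$((2 ** (cqes >> 4 & 0xf))) bytes"   # 16/16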
00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.937 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:55.937 21:03:09 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.937 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # 
nvme3[icdoff]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:55.938 21:03:09 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:55.938 21:03:09 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:55.938 21:03:09 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:55.938 21:03:09 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@18 -- # shift 00:11:55.938 21:03:09 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 
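[note] nsze just below is a count of logical blocks; this namespace's in-use format is lbaf4 (flbas 0x4 a few entries down, lbads:12, i.e. 4096-byte blocks), so the size works out to 5 GiB. A quick check with the values taken from this dump:

    nsze=$((0x140000)) lbads=12
    echo "$(( nsze * (1 << lbads) )) bytes"         # 1310720 * 4096 = 5368709120 (5 GiB)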
00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.938 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:55.938 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:55.938 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:55.939 21:03:09 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[nvmsetid]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.939 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.939 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:55.939 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.940 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.940 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.940 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.940 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.940 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.940 21:03:09 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:55.940 21:03:09 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # IFS=: 00:11:55.940 21:03:09 -- nvme/functions.sh@21 -- # read -r reg val 00:11:55.940 21:03:09 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:55.940 21:03:09 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:55.940 21:03:09 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:55.940 21:03:09 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:55.940 21:03:09 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:55.940 21:03:09 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:55.940 21:03:09 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:55.940 21:03:09 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:55.940 21:03:09 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:55.940 21:03:09 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:55.940 21:03:09 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:55.940 21:03:09 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:55.940 21:03:09 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:55.940 21:03:09 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:55.940 21:03:09 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:55.940 21:03:09 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:55.940 21:03:09 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:55.940 21:03:09 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:55.940 21:03:09 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:55.940 21:03:09 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:55.940 21:03:09 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:55.940 21:03:09 -- nvme/functions.sh@197 -- # echo nvme1 00:11:55.940 21:03:09 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:55.940 21:03:09 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:55.940 21:03:09 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:55.940 21:03:09 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:55.940 21:03:09 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:55.940 21:03:09 -- 
nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:55.940 21:03:09 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:55.940 21:03:09 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:55.940 21:03:09 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:55.940 21:03:09 -- nvme/functions.sh@197 -- # echo nvme0 00:11:55.940 21:03:09 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:55.940 21:03:09 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:55.940 21:03:09 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:55.940 21:03:09 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:55.940 21:03:09 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:55.940 21:03:09 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:55.940 21:03:09 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:55.940 21:03:09 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:55.940 21:03:09 -- nvme/functions.sh@197 -- # echo nvme3 00:11:55.940 21:03:09 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:55.940 21:03:09 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:55.940 21:03:09 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:55.940 21:03:09 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:55.940 21:03:09 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:55.940 21:03:09 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:55.940 21:03:09 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:55.940 21:03:09 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:55.940 21:03:09 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:55.940 21:03:09 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:55.940 21:03:09 -- nvme/functions.sh@197 -- # echo nvme2 00:11:55.940 21:03:09 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:55.940 21:03:09 -- nvme/functions.sh@206 -- # echo nvme1 00:11:55.940 21:03:09 -- nvme/functions.sh@207 -- # return 0 00:11:55.940 21:03:09 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:55.940 21:03:09 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:55.940 21:03:09 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:56.876 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:56.876 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:56.876 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.134 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.134 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.134 21:03:10 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:57.134 21:03:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:57.134 21:03:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.134 21:03:10 -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.134 ************************************ 00:11:57.134 START TEST nvme_simple_copy 00:11:57.134 ************************************ 00:11:57.134 21:03:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:57.392 Initializing NVMe Controllers 00:11:57.392 Attaching to 0000:00:08.0 00:11:57.392 Controller supports SCC. Attached to 0000:00:08.0 00:11:57.392 Namespace ID: 1 size: 4GB 00:11:57.392 Initialization complete. 00:11:57.392 00:11:57.392 Controller QEMU NVMe Ctrl (12342 ) 00:11:57.392 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:57.392 Namespace Block Size:4096 00:11:57.392 Writing LBAs 0 to 63 with Random Data 00:11:57.392 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:57.392 LBAs matching Written Data: 64 00:11:57.392 00:11:57.392 real 0m0.299s 00:11:57.392 user 0m0.118s 00:11:57.392 sys 0m0.079s 00:11:57.392 21:03:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.392 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:57.392 ************************************ 00:11:57.392 END TEST nvme_simple_copy 00:11:57.392 ************************************ 00:11:57.392 00:11:57.392 real 0m7.978s 00:11:57.392 user 0m1.300s 00:11:57.392 sys 0m1.706s 00:11:57.393 21:03:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.393 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:57.393 ************************************ 00:11:57.393 END TEST nvme_scc 00:11:57.393 ************************************ 00:11:57.651 21:03:11 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:11:57.651 21:03:11 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:11:57.651 21:03:11 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:11:57.651 21:03:11 -- spdk/autotest.sh@238 -- # [[ 1 -eq 1 ]] 00:11:57.651 21:03:11 -- spdk/autotest.sh@239 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:57.651 21:03:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:57.651 21:03:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.651 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:57.651 ************************************ 00:11:57.651 START TEST nvme_fdp 00:11:57.651 ************************************ 00:11:57.651 21:03:11 -- common/autotest_common.sh@1104 -- # test/nvme/nvme_fdp.sh 00:11:57.651 * Looking for test storage... 
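The xtrace above is the controller scan that drives test selection: functions.sh reads the output of /usr/local/src/nvme-cli/nvme id-ctrl line by line into per-controller bash associative arrays, and ctrl_has_scc then tests bit 8 of ONCS (0x15d on these QEMU controllers), the bit that advertises the Simple Copy Command exercised by nvme_simple_copy. A minimal sketch of the same pattern, assuming only that nvme-cli is installed and using /dev/nvme1 as an example device node:

    # Parse "key : value" lines of `nvme id-ctrl` into an associative array,
    # mirroring the IFS=: / read -r reg val loop visible in the trace.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # strip the column padding around the key
        [[ -n $reg && -n $val ]] && ctrl[$reg]=${val# }
    done < <(nvme id-ctrl /dev/nvme1)

    # ONCS bit 8 advertises the Simple Copy Command (SCC);
    # 0x15d & 0x100 is nonzero, so this controller qualifies.
    if (( ctrl[oncs] & 1 << 8 )); then
        echo "/dev/nvme1 supports Simple Copy"
    fi

The eval'd assignments in the trace exist because functions.sh keeps one named array per controller (nvme0, nvme1, ...) plus per-namespace arrays such as nvme3n1; the sketch collapses that into a single local array.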
00:11:57.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:57.651 21:03:11 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:57.651 21:03:11 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:57.651 21:03:11 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:57.651 21:03:11 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:57.651 21:03:11 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:57.651 21:03:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.651 21:03:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.651 21:03:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.651 21:03:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.651 21:03:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.651 21:03:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.651 21:03:11 -- paths/export.sh@5 -- # export PATH 00:11:57.651 21:03:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.651 21:03:11 -- nvme/functions.sh@10 -- # ctrls=() 00:11:57.651 21:03:11 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:57.651 21:03:11 -- nvme/functions.sh@11 -- # nvmes=() 00:11:57.651 21:03:11 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:57.651 21:03:11 -- nvme/functions.sh@12 -- # bdfs=() 00:11:57.651 21:03:11 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:57.651 21:03:11 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:57.651 21:03:11 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:57.651 21:03:11 -- nvme/functions.sh@14 -- # nvme_name= 00:11:57.651 21:03:11 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:57.651 21:03:11 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:58.217 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:58.218 Waiting for block devices as requested 00:11:58.218 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.218 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.476 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.476 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:03.750 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:03.751 21:03:17 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:03.751 21:03:17 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:03.751 21:03:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:03.751 21:03:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:12:03.751 21:03:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:12:03.751 21:03:17 -- scripts/common.sh@15 -- # local i 00:12:03.751 21:03:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:12:03.751 21:03:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:03.751 21:03:17 -- scripts/common.sh@24 -- # return 0 00:12:03.751 21:03:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:03.751 21:03:17 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:03.751 21:03:17 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@18 -- # shift 00:12:03.751 21:03:17 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- 
# nvme0[fr]='8.0.0 ' 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 
21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:03.751 21:03:17 -- nvme/functions.sh@21 
-- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:03.751 21:03:17 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.751 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.751 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # 
nvme0[hmpre]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 
00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:03.752 21:03:17 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.752 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.752 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 
-- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:03.753 
21:03:17 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:03.753 21:03:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:03.753 21:03:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:03.753 21:03:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:12:03.753 21:03:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:03.753 21:03:17 -- nvme/functions.sh@47 -- # for ctrl in 
/sys/class/nvme/nvme* 00:12:03.753 21:03:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:12:03.753 21:03:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:12:03.753 21:03:17 -- scripts/common.sh@15 -- # local i 00:12:03.753 21:03:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:12:03.753 21:03:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:03.753 21:03:17 -- scripts/common.sh@24 -- # return 0 00:12:03.753 21:03:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:03.753 21:03:17 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:03.753 21:03:17 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@18 -- # shift 00:12:03.753 21:03:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.753 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.753 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:03.753 21:03:17 -- 
00:12:03.753 21:03:17 -- nvme/functions.sh@23 -- # nvme_get nvme1: id-ctrl fields parsed into the nvme1 array:
00:12:03.753 21:03:17 -- #   cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:12:03.754 21:03:17 -- #   fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:12:03.754 21:03:17 -- #   oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:12:03.754 21:03:17 -- #   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:12:03.755 21:03:17 -- #   sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:12:03.755 21:03:17 -- #   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0
00:12:03.755 21:03:17 -- #   nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0
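The @16-@23 trace lines above all come from one small helper in nvme/functions.sh: nvme_get runs nvme-cli, splits each "field : value" output line on the colon, and evals the pair into a global associative array named after the device (here nvme1). A minimal sketch of that pattern, reconstructed from the trace; the shipped helper may trim and validate differently, and NVME_CMD is this sketch's knob, not necessarily the script's:

  # Sketch of the loop behind the @16-@23 trace lines (reconstruction, not the verbatim source).
  nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # e.g. declares nvme1=()
    while IFS=: read -r reg val; do        # split "field : value" on ':'
      reg=${reg//[[:space:]]/}             # strip the padding around the field name
      val=${val# }                         # strip the leading space of the value
      [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
    done < <(${NVME_CMD:-nvme} "$@")       # this log's binary: /usr/local/src/nvme-cli/nvme
  }
  # Usage mirroring this log:
  #   NVME_CMD=/usr/local/src/nvme-cli/nvme nvme_get nvme1 id-ctrl /dev/nvme1
  #   echo "${nvme1[mdts]}"                # -> 7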
00:12:03.756 21:03:17 -- #   icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:12:03.756 21:03:17 -- #   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:03.756 21:03:17 -- #   rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:12:03.756 21:03:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:12:03.756 21:03:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:12:03.756 21:03:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:12:03.756 21:03:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:12:03.756 21:03:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:12:03.756 21:03:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:12:03.756 21:03:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
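The @53-@57 lines mark the hand-off from controller to namespace identify data: the harness walks every nvme1n* node under /sys/class/nvme/nvme1, runs nvme_get with id-ns for each, and records the result in nvme1_ns through a nameref. A hedged sketch of that scan, with names taken from the trace and control flow assumed:

  # Assumed shape of the namespace scan behind the @53-@58 trace lines.
  ctrl=/sys/class/nvme/nvme1
  declare -A "${ctrl##*/}_ns=()"            # nvme1_ns, the per-controller map
  declare -n _ctrl_ns=${ctrl##*/}_ns        # nameref, as in the @53 line
  for ns in "$ctrl/${ctrl##*/}n"*; do       # nvme1n1 nvme1n2 nvme1n3 ...
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # fills nvme1n1=() etc. as above
    _ctrl_ns[${ns##*n}]=$ns_dev             # keyed by namespace number
  done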
00:12:03.756 21:03:17 -- nvme/functions.sh@23 -- # nvme_get nvme1n1: id-ns fields parsed into the nvme1n1 array:
00:12:03.756 21:03:17 -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0
00:12:03.757 21:03:17 -- #   fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:12:03.757 21:03:17 -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:12:03.757 21:03:17 -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:03.757 21:03:17 -- #   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:12:03.757 21:03:17 -- #   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:03.757 21:03:17 -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1
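At this point everything the test needs about nvme1n1 is queryable from the array: flbas=0x4 selects lbaf4 ("ms:0 lbads:12 rp:0 (in use)"), i.e. 4096-byte logical blocks, which together with nsze=0x100000 puts the namespace at 4 GiB. A hedged decode of those captured fields; illustrative only, not a line from the harness:

  # Illustrative decode of the nvme1n1 fields captured above.
  flbas=$(( ${nvme1n1[flbas]} & 0xf ))             # low nibble = active LBA format index
  [[ ${nvme1n1[lbaf$flbas]} =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
  block_size=$(( 1 << lbads ))                     # 1 << 12 = 4096 bytes
  size_bytes=$(( ${nvme1n1[nsze]} * block_size ))  # 0x100000 * 4096 = 4 GiB
  echo "nvme1n1: ${block_size}B blocks, ${size_bytes} bytes"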
00:12:03.757 21:03:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:12:03.757 21:03:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]]
00:12:03.757 21:03:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2
00:12:03.757 21:03:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2
00:12:03.757 21:03:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()'
00:12:03.757 21:03:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2
00:12:03.757 21:03:17 -- nvme/functions.sh@23 -- # nvme_get nvme1n2: id-ns fields parsed into the nvme1n2 array (values match nvme1n1):
00:12:03.757 21:03:17 -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0
00:12:03.758 21:03:17 -- #   fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:12:03.758 21:03:17 -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:12:03.758 21:03:17 -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:03.758 21:03:17 -- #   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:12:03.758 21:03:17 -- #   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:03.758 21:03:17 -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme1n2
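These dumps exist so later stages can gate on captured capabilities without re-running nvme-cli. As a hedged illustration (not taken from the harness), ONCS bit 3 is Write Zeroes, and the nvme1[oncs]=0x15d recorded earlier has it set:

  # Illustrative capability check against the array nvme_get populated.
  if (( ${nvme1[oncs]} & (1 << 3) )); then         # 0x15d & 0x8 -> true
    echo "nvme1 supports Write Zeroes"
  fi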
00:12:03.758 21:03:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:12:03.758 21:03:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]]
00:12:03.758 21:03:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3
00:12:03.758 21:03:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3
00:12:03.759 21:03:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()'
00:12:03.759 21:03:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3
00:12:03.759 21:03:17 -- nvme/functions.sh@23 -- # nvme_get nvme1n3: id-ns fields parsed into the nvme1n3 array (values match nvme1n1):
00:12:03.759 21:03:17 -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0
00:12:03.759 21:03:17 -- #   fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:12:03.760 21:03:17 -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:12:03.760 21:03:17 -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:03.760 21:03:17 -- #   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:12:03.760 21:03:17 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:12:03.760 21:03:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:03.760 21:03:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:03.760 21:03:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:12:03.760 21:03:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:03.760 21:03:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:03.760 21:03:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:12:03.760 21:03:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:12:03.760 21:03:17 -- scripts/common.sh@15 -- # local i 00:12:03.760 21:03:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:12:03.760 21:03:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:03.760 21:03:17 -- scripts/common.sh@24 -- # return 0 00:12:03.760 21:03:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:03.760 21:03:17 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:03.760 21:03:17 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@18 -- # shift 00:12:03.760 21:03:17 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 
'nvme2[rtd3r]="0"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.760 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:03.760 21:03:17 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:03.760 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 
00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:03.761 
21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:03.761 
21:03:17 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.761 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.761 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:03.761 21:03:17 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 
21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.762 21:03:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:03.762 21:03:17 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.762 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 
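[Editor's note] In the trace just below, the power-state fields (ps0, rwt, active_power_workload) close out nvme2's id-ctrl dump; the script then binds a nameref to a per-controller namespace array (functions.sh@53), repeats nvme_get with id-ns for each /sys/class/nvme/nvme2/nvme2n* node, and records the controller in the ctrls/nvmes/bdfs/ordered_ctrls bookkeeping arrays before moving on to nvme3. A hedged sketch of that control flow, with array names taken from the trace and the pci_can_use gate and sysfs PCI lookup simplified:

    # Simplified reconstruction of the enumeration seen in the trace
    # (pci_can_use() is omitted; the readlink-based BDF lookup is illustrative).
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    scan_nvme_ctrls() {
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            ctrl_dev=${ctrl##*/}                            # e.g. nvme2
            pci=$(basename "$(readlink -f "$ctrl/device")") # e.g. 0000:00:06.0 (illustrative)
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -n _ctrl_ns=${ctrl_dev}_ns                # nameref, as in functions.sh@53
            for ns in "$ctrl/${ctrl##*/}n"*; do             # e.g. .../nvme2/nvme2n1
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev                 # keyed by namespace index: 1 -> nvme2n1
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }

    # The ordered_ctrls index trick (${ctrl_dev/nvme/} -> 2 for nvme2) keeps the
    # controllers sorted by number regardless of the sysfs glob's ordering.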
00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:03.763 21:03:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:03.763 21:03:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:03.763 21:03:17 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:03.763 21:03:17 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@18 -- # shift 00:12:03.763 21:03:17 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 
-- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:03.763 21:03:17 -- 
nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.763 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.763 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.763 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 
lbads:9 rp:0 ' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:03.764 21:03:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:03.764 21:03:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:03.764 21:03:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:12:03.764 21:03:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:03.764 21:03:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:03.764 21:03:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:12:03.764 21:03:17 -- 
nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:12:03.764 21:03:17 -- scripts/common.sh@15 -- # local i 00:12:03.764 21:03:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:12:03.764 21:03:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:03.764 21:03:17 -- scripts/common.sh@24 -- # return 0 00:12:03.764 21:03:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:03.764 21:03:17 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:03.764 21:03:17 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@18 -- # shift 00:12:03.764 21:03:17 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.764 21:03:17 -- 
nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.764 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:03.764 21:03:17 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:03.764 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:03.765 
21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:03.765 21:03:17 -- 
nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- 
nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.765 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.765 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.765 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:03.766 21:03:17 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:03.766 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.766 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.766 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.766 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:03.766 21:03:17 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:03.766 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.766 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.766 21:03:17 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.766 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:03.766 21:03:17 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:03.766 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:03.766 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:03.766 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:03.766 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.026 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.026 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.026 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.026 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.026 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.026 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.026 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:04.026 21:03:17 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.026 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.026 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 
00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # 
nvme3[nwpc]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:04.027 21:03:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.027 21:03:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:12:04.027 21:03:17 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:12:04.027 21:03:17 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@18 -- # shift 00:12:04.027 21:03:17 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 
00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:12:04.027 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.027 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.027 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # 
nvme3n1[fpi]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- 
nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 
-- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.028 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.028 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.028 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.029 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.029 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.029 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.029 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.029 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:04.029 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:04.029 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.029 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.029 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.029 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:04.029 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.029 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.029 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 
lbads:12 rp:0 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.029 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.029 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.029 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.029 21:03:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:04.029 21:03:17 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:04.029 21:03:17 -- nvme/functions.sh@21 -- # IFS=: 00:12:04.029 21:03:17 -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.029 21:03:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:12:04.029 21:03:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:04.029 21:03:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:04.029 21:03:17 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:12:04.029 21:03:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:04.029 21:03:17 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:04.029 21:03:17 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:04.029 21:03:17 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:04.029 21:03:17 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:04.029 21:03:17 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:04.029 21:03:17 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:04.029 21:03:17 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:04.029 21:03:17 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:04.029 21:03:17 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.029 21:03:17 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:04.029 21:03:17 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:04.029 21:03:17 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:04.029 21:03:17 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:04.029 21:03:17 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:04.029 21:03:17 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:04.029 21:03:17 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.029 21:03:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.029 21:03:17 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:04.029 21:03:17 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:04.029 21:03:17 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:04.029 21:03:17 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:04.029 21:03:17 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@76 -- # echo 0x88010 
00:12:04.029 21:03:17 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:04.029 21:03:17 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.029 21:03:17 -- nvme/functions.sh@197 -- # echo nvme0 00:12:04.029 21:03:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.029 21:03:17 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:04.029 21:03:17 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:04.029 21:03:17 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:04.029 21:03:17 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:04.029 21:03:17 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:04.029 21:03:17 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:04.029 21:03:17 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.029 21:03:17 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.029 21:03:17 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:04.029 21:03:17 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:04.029 21:03:17 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:04.029 21:03:17 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:04.029 21:03:17 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:04.029 21:03:17 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:04.029 21:03:17 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:04.029 21:03:17 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:04.029 21:03:17 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.029 21:03:17 -- nvme/functions.sh@204 -- # trap - ERR 00:12:04.029 21:03:17 -- nvme/functions.sh@204 -- # print_backtrace 00:12:04.029 21:03:17 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:12:04.029 21:03:17 -- common/autotest_common.sh@1132 -- # return 0 00:12:04.029 21:03:17 -- nvme/functions.sh@204 -- # trap - ERR 00:12:04.029 21:03:17 -- nvme/functions.sh@204 -- # print_backtrace 00:12:04.029 21:03:17 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:12:04.029 21:03:17 -- common/autotest_common.sh@1132 -- # return 0 00:12:04.029 21:03:17 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:04.029 21:03:17 -- nvme/functions.sh@206 -- # echo nvme0 00:12:04.029 21:03:17 -- nvme/functions.sh@207 -- # return 0 00:12:04.029 21:03:17 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:12:04.029 21:03:17 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:12:04.029 21:03:17 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:04.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:05.224 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:05.224 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:05.224 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:05.224 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:05.224 21:03:19 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement 
/home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:12:05.224 21:03:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:05.224 21:03:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:05.224 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.224 ************************************ 00:12:05.224 START TEST nvme_flexible_data_placement 00:12:05.224 ************************************ 00:12:05.224 21:03:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:12:05.483 Initializing NVMe Controllers 00:12:05.483 Attaching to 0000:00:09.0 00:12:05.483 Controller supports FDP Attached to 0000:00:09.0 00:12:05.483 Namespace ID: 1 Endurance Group ID: 1 00:12:05.483 Initialization complete. 00:12:05.483 00:12:05.483 ================================== 00:12:05.483 == FDP tests for Namespace: #01 == 00:12:05.483 ================================== 00:12:05.483 00:12:05.483 Get Feature: FDP: 00:12:05.483 ================= 00:12:05.483 Enabled: Yes 00:12:05.483 FDP configuration Index: 0 00:12:05.483 00:12:05.483 FDP configurations log page 00:12:05.483 =========================== 00:12:05.483 Number of FDP configurations: 1 00:12:05.483 Version: 0 00:12:05.483 Size: 112 00:12:05.483 FDP Configuration Descriptor: 0 00:12:05.483 Descriptor Size: 96 00:12:05.483 Reclaim Group Identifier format: 2 00:12:05.483 FDP Volatile Write Cache: Not Present 00:12:05.483 FDP Configuration: Valid 00:12:05.483 Vendor Specific Size: 0 00:12:05.483 Number of Reclaim Groups: 2 00:12:05.483 Number of Reclaim Unit Handles: 8 00:12:05.483 Max Placement Identifiers: 128 00:12:05.483 Number of Namespaces Supported: 256 00:12:05.483 Reclaim Unit Nominal Size: 6000000 bytes 00:12:05.483 Estimated Reclaim Unit Time Limit: Not Reported 00:12:05.483 RUH Desc #000: RUH Type: Initially Isolated 00:12:05.483 RUH Desc #001: RUH Type: Initially Isolated 00:12:05.483 RUH Desc #002: RUH Type: Initially Isolated 00:12:05.483 RUH Desc #003: RUH Type: Initially Isolated 00:12:05.483 RUH Desc #004: RUH Type: Initially Isolated 00:12:05.483 RUH Desc #005: RUH Type: Initially Isolated 00:12:05.483 RUH Desc #006: RUH Type: Initially Isolated 00:12:05.483 RUH Desc #007: RUH Type: Initially Isolated 00:12:05.483 00:12:05.483 FDP reclaim unit handle usage log page 00:12:05.483 ====================================== 00:12:05.483 Number of Reclaim Unit Handles: 8 00:12:05.483 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:05.483 RUH Usage Desc #001: RUH Attributes: Unused 00:12:05.483 RUH Usage Desc #002: RUH Attributes: Unused 00:12:05.483 RUH Usage Desc #003: RUH Attributes: Unused 00:12:05.483 RUH Usage Desc #004: RUH Attributes: Unused 00:12:05.483 RUH Usage Desc #005: RUH Attributes: Unused 00:12:05.483 RUH Usage Desc #006: RUH Attributes: Unused 00:12:05.483 RUH Usage Desc #007: RUH Attributes: Unused 00:12:05.483 00:12:05.483 FDP statistics log page 00:12:05.483 ======================= 00:12:05.483 Host bytes with metadata written: 759898112 00:12:05.483 Media bytes with metadata written: 760111104 00:12:05.483 Media bytes erased: 0 00:12:05.483 00:12:05.483 FDP Reclaim unit handle status 00:12:05.483 ============================== 00:12:05.483 Number of RUHS descriptors: 2 00:12:05.483 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002b4e 00:12:05.483 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:05.483 
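The ctrl_has_fdp checks traced earlier reduce to a single bitwise test: functions.sh treats CTRATT bit 19 as the Flexible Data Placement flag, which is why only nvme0 (ctratt=0x88010) is selected as the FDP controller while the controllers reporting 0x8000 are skipped. A minimal standalone sketch of that test follows; the has_fdp helper name and the hard-coded values are illustrative, not part of functions.sh:

has_fdp() {
    local ctratt=$1
    # CTRATT bit 19 (1 << 19 = 0x80000) advertises FDP support
    (( ctratt & 1 << 19 ))
}
has_fdp 0x88010 && echo "FDP supported"   # 0x88010 & 0x80000 = 0x80000 -> true  (nvme0)
has_fdp 0x8000  || echo "no FDP"          # 0x8000  & 0x80000 = 0       -> false (nvme1/2/3)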
00:12:05.483 FDP write on placement id: 0 success 00:12:05.483 00:12:05.483 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:05.483 00:12:05.483 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:05.483 00:12:05.483 Get Feature: FDP Events for Placement handle: #0 00:12:05.483 ======================== 00:12:05.483 Number of FDP Events: 6 00:12:05.483 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:05.483 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:05.483 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:12:05.483 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:05.483 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:05.483 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:12:05.483 00:12:05.483 FDP events log page 00:12:05.483 =================== 00:12:05.483 Number of FDP events: 1 00:12:05.483 FDP Event #0: 00:12:05.483 Event Type: RU Not Written to Capacity 00:12:05.483 Placement Identifier: Valid 00:12:05.483 NSID: Valid 00:12:05.483 Location: Valid 00:12:05.483 Placement Identifier: 0 00:12:05.483 Event Timestamp: d 00:12:05.483 Namespace Identifier: 1 00:12:05.483 Reclaim Group Identifier: 0 00:12:05.483 Reclaim Unit Handle Identifier: 0 00:12:05.483 00:12:05.483 FDP test passed 00:12:05.483 00:12:05.483 real 0m0.308s 00:12:05.483 user 0m0.112s 00:12:05.483 sys 0m0.093s 00:12:05.483 21:03:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.483 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.483 ************************************ 00:12:05.483 END TEST nvme_flexible_data_placement 00:12:05.483 ************************************ 00:12:05.743 00:12:05.743 real 0m8.110s 00:12:05.743 user 0m1.393s 00:12:05.743 sys 0m1.747s 00:12:05.743 21:03:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.743 ************************************ 00:12:05.743 END TEST nvme_fdp 00:12:05.743 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.743 ************************************ 00:12:05.743 21:03:19 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:12:05.743 21:03:19 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:05.743 21:03:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:05.743 21:03:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:05.743 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.743 ************************************ 00:12:05.743 START TEST nvme_rpc 00:12:05.743 ************************************ 00:12:05.743 21:03:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:05.743 * Looking for test storage... 
00:12:05.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:05.743 21:03:19 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:05.743 21:03:19 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:05.743 21:03:19 -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:05.743 21:03:19 -- common/autotest_common.sh@1509 -- # local bdfs 00:12:05.743 21:03:19 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:05.743 21:03:19 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:05.743 21:03:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:05.743 21:03:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:05.743 21:03:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:05.743 21:03:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:05.743 21:03:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:05.743 21:03:19 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:05.743 21:03:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:05.743 21:03:19 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:12:05.743 21:03:19 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:12:06.002 21:03:19 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67006 00:12:06.002 21:03:19 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:06.002 21:03:19 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:06.002 21:03:19 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67006 00:12:06.002 21:03:19 -- common/autotest_common.sh@819 -- # '[' -z 67006 ']' 00:12:06.002 21:03:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.002 21:03:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:06.002 21:03:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.002 21:03:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:06.002 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:12:06.002 [2024-07-13 21:03:19.779730] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
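The get_first_nvme_bdf trace above amounts to taking the first transport address that gen_nvme.sh reports. A condensed sketch of that helper, assuming gen_nvme.sh keeps emitting its usual {"config":[{"params":{"traddr":...}}]} JSON (the real helper goes through get_nvme_bdfs; this collapses the two steps):

get_first_nvme_bdf() {
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || return 1  # bail out when no NVMe controllers are found
    echo "${bdfs[0]}"                  # here: 0000:00:06.0, the first of the four controllers
}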
00:12:06.002 [2024-07-13 21:03:19.780691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67006 ] 00:12:06.260 [2024-07-13 21:03:19.955694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:06.520 [2024-07-13 21:03:20.193003] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:06.520 [2024-07-13 21:03:20.193670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.520 [2024-07-13 21:03:20.193682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.897 21:03:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:07.897 21:03:21 -- common/autotest_common.sh@852 -- # return 0 00:12:07.897 21:03:21 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:12:07.897 Nvme0n1 00:12:08.155 21:03:21 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:08.155 21:03:21 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:08.413 request: 00:12:08.413 { 00:12:08.413 "filename": "non_existing_file", 00:12:08.413 "bdev_name": "Nvme0n1", 00:12:08.413 "method": "bdev_nvme_apply_firmware", 00:12:08.413 "req_id": 1 00:12:08.413 } 00:12:08.413 Got JSON-RPC error response 00:12:08.413 response: 00:12:08.413 { 00:12:08.413 "code": -32603, 00:12:08.413 "message": "open file failed." 00:12:08.413 } 00:12:08.413 21:03:22 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:08.413 21:03:22 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:08.413 21:03:22 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:08.672 21:03:22 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:08.672 21:03:22 -- nvme/nvme_rpc.sh@40 -- # killprocess 67006 00:12:08.672 21:03:22 -- common/autotest_common.sh@926 -- # '[' -z 67006 ']' 00:12:08.672 21:03:22 -- common/autotest_common.sh@930 -- # kill -0 67006 00:12:08.672 21:03:22 -- common/autotest_common.sh@931 -- # uname 00:12:08.672 21:03:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:08.672 21:03:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67006 00:12:08.672 21:03:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:08.672 killing process with pid 67006 00:12:08.672 21:03:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:08.672 21:03:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67006' 00:12:08.672 21:03:22 -- common/autotest_common.sh@945 -- # kill 67006 00:12:08.672 21:03:22 -- common/autotest_common.sh@950 -- # wait 67006 00:12:10.641 00:12:10.641 real 0m4.937s 00:12:10.641 user 0m9.643s 00:12:10.641 sys 0m0.629s 00:12:10.641 21:03:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.641 ************************************ 00:12:10.641 END TEST nvme_rpc 00:12:10.641 ************************************ 00:12:10.641 21:03:24 -- common/autotest_common.sh@10 -- # set +x 00:12:10.641 21:03:24 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:10.641 21:03:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:10.641 21:03:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:12:10.641 21:03:24 -- common/autotest_common.sh@10 -- # set +x 00:12:10.641 ************************************ 00:12:10.641 START TEST nvme_rpc_timeouts 00:12:10.641 ************************************ 00:12:10.641 21:03:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:10.900 * Looking for test storage... 00:12:10.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:10.900 21:03:24 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:10.900 21:03:24 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67095 00:12:10.900 21:03:24 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67095 00:12:10.900 21:03:24 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67118 00:12:10.900 21:03:24 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:10.900 21:03:24 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:10.900 21:03:24 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67118 00:12:10.900 21:03:24 -- common/autotest_common.sh@819 -- # '[' -z 67118 ']' 00:12:10.900 21:03:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.900 21:03:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:10.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.900 21:03:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.900 21:03:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:10.900 21:03:24 -- common/autotest_common.sh@10 -- # set +x 00:12:10.900 [2024-07-13 21:03:24.699754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:10.900 [2024-07-13 21:03:24.699938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67118 ] 00:12:11.158 [2024-07-13 21:03:24.874367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:11.159 [2024-07-13 21:03:25.071641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:11.159 [2024-07-13 21:03:25.072014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.159 [2024-07-13 21:03:25.072025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.533 21:03:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:12.533 Checking default timeout settings: 00:12:12.533 21:03:26 -- common/autotest_common.sh@852 -- # return 0 00:12:12.533 21:03:26 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:12.533 21:03:26 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:13.101 Making settings changes with rpc: 00:12:13.101 21:03:26 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:13.101 21:03:26 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:13.359 Check default vs. 
modified settings: 00:12:13.359 21:03:27 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:13.359 21:03:27 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67095 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67095 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:13.618 Setting action_on_timeout is changed as expected. 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67095 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67095 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:13.618 Setting timeout_us is changed as expected. 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67095 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67095 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:13.618 Setting timeout_admin_us is changed as expected. 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
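Each of the three checks above follows the same pattern: the test dumps the bdev configuration with save_config before and after bdev_nvme_set_options, extracts one field from each dump, and requires that the value changed to what was requested. Below is a minimal sketch of that comparison, assuming the two /tmp/settings_*_67095 files were produced by rpc.py save_config as shown in the log; the get_value and check_setting helper names are illustrative, not part of nvme_rpc_timeouts.sh:

    default=/tmp/settings_default_67095
    modified=/tmp/settings_modified_67095

    # Pull the value for a JSON key out of a save_config dump, stripping
    # quotes and commas the same way the test's awk/sed pipeline does.
    get_value() {
        grep "\"$2\"" "$1" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }

    check_setting() {
        local key=$1 expected=$2
        local before after
        before=$(get_value "$default" "$key")
        after=$(get_value "$modified" "$key")
        # Pass only if the value moved off the default and landed on the
        # value that bdev_nvme_set_options was asked to apply.
        if [[ "$before" != "$after" && "$after" == "$expected" ]]; then
            echo "Setting $key is changed as expected."
        else
            echo "Setting $key was NOT changed as expected" >&2
            return 1
        fi
    }

    check_setting action_on_timeout abort
    check_setting timeout_us 12000000
    check_setting timeout_admin_us 24000000

The expected values mirror the --action-on-timeout=abort --timeout-us=12000000 --timeout-admin-us=24000000 arguments passed to rpc.py earlier in the run.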
00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67095 /tmp/settings_modified_67095 00:12:13.618 21:03:27 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67118 00:12:13.618 21:03:27 -- common/autotest_common.sh@926 -- # '[' -z 67118 ']' 00:12:13.618 21:03:27 -- common/autotest_common.sh@930 -- # kill -0 67118 00:12:13.618 21:03:27 -- common/autotest_common.sh@931 -- # uname 00:12:13.619 21:03:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:13.619 21:03:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67118 00:12:13.619 21:03:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:13.619 killing process with pid 67118 00:12:13.619 21:03:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:13.619 21:03:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67118' 00:12:13.619 21:03:27 -- common/autotest_common.sh@945 -- # kill 67118 00:12:13.619 21:03:27 -- common/autotest_common.sh@950 -- # wait 67118 00:12:16.145 RPC TIMEOUT SETTING TEST PASSED. 00:12:16.145 21:03:29 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:12:16.145 00:12:16.145 real 0m5.194s 00:12:16.145 user 0m10.355s 00:12:16.145 sys 0m0.600s 00:12:16.145 21:03:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.145 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:12:16.145 ************************************ 00:12:16.145 END TEST nvme_rpc_timeouts 00:12:16.145 ************************************ 00:12:16.145 21:03:29 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:12:16.145 21:03:29 -- spdk/autotest.sh@255 -- # [[ 1 -eq 1 ]] 00:12:16.145 21:03:29 -- spdk/autotest.sh@256 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.145 21:03:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:16.145 21:03:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:16.145 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:12:16.145 ************************************ 00:12:16.145 START TEST nvme_xnvme 00:12:16.145 ************************************ 00:12:16.145 21:03:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.145 * Looking for test storage... 
00:12:16.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.145 21:03:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.145 21:03:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.145 21:03:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.145 21:03:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.145 21:03:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.145 21:03:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.145 21:03:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.145 21:03:29 -- paths/export.sh@5 -- # export PATH 00:12:16.145 21:03:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:16.145 21:03:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:16.145 21:03:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:16.145 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:12:16.145 ************************************ 00:12:16.145 START TEST xnvme_to_malloc_dd_copy 00:12:16.145 ************************************ 00:12:16.145 21:03:29 -- common/autotest_common.sh@1104 -- # malloc_to_xnvme_copy 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:16.145 21:03:29 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:16.145 21:03:29 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:16.145 21:03:29 -- dd/common.sh@191 -- # return 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@18 -- # local io 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:16.145 
21:03:29 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:16.145 21:03:29 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:16.146 21:03:29 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:16.146 21:03:29 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:16.146 21:03:29 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:16.146 21:03:29 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:16.146 21:03:29 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:16.146 21:03:29 -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:16.146 21:03:29 -- dd/common.sh@31 -- # xtrace_disable 00:12:16.146 21:03:29 -- common/autotest_common.sh@10 -- # set +x 00:12:16.146 { 00:12:16.146 "subsystems": [ 00:12:16.146 { 00:12:16.146 "subsystem": "bdev", 00:12:16.146 "config": [ 00:12:16.146 { 00:12:16.146 "params": { 00:12:16.146 "block_size": 512, 00:12:16.146 "num_blocks": 2097152, 00:12:16.146 "name": "malloc0" 00:12:16.146 }, 00:12:16.146 "method": "bdev_malloc_create" 00:12:16.146 }, 00:12:16.146 { 00:12:16.146 "params": { 00:12:16.146 "io_mechanism": "libaio", 00:12:16.146 "filename": "/dev/nullb0", 00:12:16.146 "name": "null0" 00:12:16.146 }, 00:12:16.146 "method": "bdev_xnvme_create" 00:12:16.146 }, 00:12:16.146 { 00:12:16.146 "method": "bdev_wait_for_examine" 00:12:16.146 } 00:12:16.146 ] 00:12:16.146 } 00:12:16.146 ] 00:12:16.146 } 00:12:16.146 [2024-07-13 21:03:29.959358] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:16.146 [2024-07-13 21:03:29.959801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67262 ] 00:12:16.403 [2024-07-13 21:03:30.134259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.661 [2024-07-13 21:03:30.331639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.800  Copying: 151/1024 [MB] (151 MBps) Copying: 303/1024 [MB] (152 MBps) Copying: 455/1024 [MB] (152 MBps) Copying: 608/1024 [MB] (152 MBps) Copying: 769/1024 [MB] (161 MBps) Copying: 938/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 156 MBps) 00:12:27.800 00:12:27.800 21:03:41 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:27.800 21:03:41 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:27.800 21:03:41 -- dd/common.sh@31 -- # xtrace_disable 00:12:27.800 21:03:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.800 { 00:12:27.800 "subsystems": [ 00:12:27.800 { 00:12:27.800 "subsystem": "bdev", 00:12:27.800 "config": [ 00:12:27.800 { 00:12:27.800 "params": { 00:12:27.800 "block_size": 512, 00:12:27.800 "num_blocks": 2097152, 00:12:27.800 "name": "malloc0" 00:12:27.800 }, 00:12:27.800 "method": "bdev_malloc_create" 00:12:27.800 }, 00:12:27.800 { 00:12:27.800 "params": { 00:12:27.800 "io_mechanism": "libaio", 00:12:27.800 "filename": "/dev/nullb0", 00:12:27.800 "name": "null0" 00:12:27.800 }, 00:12:27.800 "method": "bdev_xnvme_create" 00:12:27.800 }, 00:12:27.800 { 00:12:27.800 "method": "bdev_wait_for_examine" 00:12:27.800 } 00:12:27.800 ] 00:12:27.800 } 00:12:27.800 ] 00:12:27.800 } 00:12:27.800 [2024-07-13 21:03:41.466874] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:27.800 [2024-07-13 21:03:41.467020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67389 ] 00:12:27.800 [2024-07-13 21:03:41.636136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.058 [2024-07-13 21:03:41.817956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.967  Copying: 182/1024 [MB] (182 MBps) Copying: 369/1024 [MB] (186 MBps) Copying: 558/1024 [MB] (188 MBps) Copying: 747/1024 [MB] (189 MBps) Copying: 936/1024 [MB] (189 MBps) Copying: 1024/1024 [MB] (average 187 MBps) 00:12:37.967 00:12:37.967 21:03:51 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:37.967 21:03:51 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:37.967 21:03:51 -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:37.967 21:03:51 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:37.967 21:03:51 -- dd/common.sh@31 -- # xtrace_disable 00:12:37.967 21:03:51 -- common/autotest_common.sh@10 -- # set +x 00:12:37.967 { 00:12:37.967 "subsystems": [ 00:12:37.967 { 00:12:37.967 "subsystem": "bdev", 00:12:37.967 "config": [ 00:12:37.967 { 00:12:37.967 "params": { 00:12:37.967 "block_size": 512, 00:12:37.967 "num_blocks": 2097152, 00:12:37.967 "name": "malloc0" 00:12:37.967 }, 00:12:37.967 "method": "bdev_malloc_create" 00:12:37.967 }, 00:12:37.967 { 00:12:37.967 "params": { 00:12:37.967 "io_mechanism": "io_uring", 00:12:37.967 "filename": "/dev/nullb0", 00:12:37.967 "name": "null0" 00:12:37.967 }, 00:12:37.967 "method": "bdev_xnvme_create" 00:12:37.967 }, 00:12:37.967 { 00:12:37.967 "method": "bdev_wait_for_examine" 00:12:37.967 } 00:12:37.967 ] 00:12:37.967 } 00:12:37.967 ] 00:12:37.967 } 00:12:37.967 [2024-07-13 21:03:51.818619] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:37.967 [2024-07-13 21:03:51.818770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67504 ] 00:12:38.225 [2024-07-13 21:03:51.987613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.483 [2024-07-13 21:03:52.157857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.117  Copying: 194/1024 [MB] (194 MBps) Copying: 388/1024 [MB] (194 MBps) Copying: 584/1024 [MB] (195 MBps) Copying: 780/1024 [MB] (195 MBps) Copying: 975/1024 [MB] (194 MBps) Copying: 1024/1024 [MB] (average 195 MBps) 00:12:48.117 00:12:48.117 21:04:01 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:48.117 21:04:01 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:48.117 21:04:01 -- dd/common.sh@31 -- # xtrace_disable 00:12:48.117 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.117 { 00:12:48.117 "subsystems": [ 00:12:48.117 { 00:12:48.117 "subsystem": "bdev", 00:12:48.117 "config": [ 00:12:48.117 { 00:12:48.117 "params": { 00:12:48.117 "block_size": 512, 00:12:48.117 "num_blocks": 2097152, 00:12:48.117 "name": "malloc0" 00:12:48.117 }, 00:12:48.117 "method": "bdev_malloc_create" 00:12:48.117 }, 00:12:48.117 { 00:12:48.117 "params": { 00:12:48.117 "io_mechanism": "io_uring", 00:12:48.117 "filename": "/dev/nullb0", 00:12:48.117 "name": "null0" 00:12:48.117 }, 00:12:48.117 "method": "bdev_xnvme_create" 00:12:48.117 }, 00:12:48.117 { 00:12:48.117 "method": "bdev_wait_for_examine" 00:12:48.117 } 00:12:48.117 ] 00:12:48.117 } 00:12:48.117 ] 00:12:48.117 } 00:12:48.117 [2024-07-13 21:04:01.833138] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:48.117 [2024-07-13 21:04:01.833288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67619 ] 00:12:48.117 [2024-07-13 21:04:02.002661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.376 [2024-07-13 21:04:02.175021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.075  Copying: 203/1024 [MB] (203 MBps) Copying: 408/1024 [MB] (204 MBps) Copying: 612/1024 [MB] (204 MBps) Copying: 817/1024 [MB] (205 MBps) Copying: 1023/1024 [MB] (205 MBps) Copying: 1024/1024 [MB] (average 204 MBps) 00:12:58.075 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:58.075 21:04:11 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:58.075 ************************************ 00:12:58.075 END TEST xnvme_to_malloc_dd_copy 00:12:58.075 ************************************ 00:12:58.075 00:12:58.075 real 0m41.707s 00:12:58.075 user 0m36.357s 00:12:58.075 sys 0m4.747s 00:12:58.075 21:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.075 21:04:11 -- common/autotest_common.sh@10 -- # set +x 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:58.075 21:04:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:58.075 21:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:58.075 21:04:11 -- common/autotest_common.sh@10 -- # set +x 00:12:58.075 ************************************ 00:12:58.075 START TEST xnvme_bdevperf 00:12:58.075 ************************************ 00:12:58.075 21:04:11 -- common/autotest_common.sh@1104 -- # xnvme_bdevperf 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:58.075 21:04:11 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:58.075 21:04:11 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:58.075 21:04:11 -- dd/common.sh@191 -- # return 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@60 -- # local io 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:58.075 21:04:11 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:58.075 21:04:11 -- dd/common.sh@31 -- # xtrace_disable 00:12:58.075 21:04:11 -- common/autotest_common.sh@10 -- # set +x 00:12:58.075 { 00:12:58.075 "subsystems": [ 00:12:58.075 { 00:12:58.075 "subsystem": "bdev", 00:12:58.075 "config": [ 00:12:58.075 { 00:12:58.075 "params": { 00:12:58.075 "io_mechanism": "libaio", 00:12:58.075 "filename": 
"/dev/nullb0", 00:12:58.075 "name": "null0" 00:12:58.075 }, 00:12:58.075 "method": "bdev_xnvme_create" 00:12:58.075 }, 00:12:58.075 { 00:12:58.075 "method": "bdev_wait_for_examine" 00:12:58.075 } 00:12:58.075 ] 00:12:58.075 } 00:12:58.075 ] 00:12:58.075 } 00:12:58.075 [2024-07-13 21:04:11.725063] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:58.075 [2024-07-13 21:04:11.725226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67756 ] 00:12:58.075 [2024-07-13 21:04:11.895665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.333 [2024-07-13 21:04:12.062708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.592 Running I/O for 5 seconds... 00:13:03.854 00:13:03.854 Latency(us) 00:13:03.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.854 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:03.854 null0 : 5.00 115667.20 451.82 0.00 0.00 550.11 156.39 1050.07 00:13:03.854 =================================================================================================================== 00:13:03.854 Total : 115667.20 451.82 0.00 0.00 550.11 156.39 1050.07 00:13:04.788 21:04:18 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:04.789 21:04:18 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:04.789 21:04:18 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:04.789 21:04:18 -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:04.789 21:04:18 -- dd/common.sh@31 -- # xtrace_disable 00:13:04.789 21:04:18 -- common/autotest_common.sh@10 -- # set +x 00:13:04.789 { 00:13:04.789 "subsystems": [ 00:13:04.789 { 00:13:04.789 "subsystem": "bdev", 00:13:04.789 "config": [ 00:13:04.789 { 00:13:04.789 "params": { 00:13:04.789 "io_mechanism": "io_uring", 00:13:04.789 "filename": "/dev/nullb0", 00:13:04.789 "name": "null0" 00:13:04.789 }, 00:13:04.789 "method": "bdev_xnvme_create" 00:13:04.789 }, 00:13:04.789 { 00:13:04.789 "method": "bdev_wait_for_examine" 00:13:04.789 } 00:13:04.789 ] 00:13:04.789 } 00:13:04.789 ] 00:13:04.789 } 00:13:04.789 [2024-07-13 21:04:18.457081] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:04.789 [2024-07-13 21:04:18.457234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67830 ] 00:13:04.789 [2024-07-13 21:04:18.626638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.047 [2024-07-13 21:04:18.794348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.305 Running I/O for 5 seconds... 
00:13:10.570 00:13:10.570 Latency(us) 00:13:10.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.571 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:10.571 null0 : 5.00 169379.00 661.64 0.00 0.00 374.97 251.35 640.47 00:13:10.571 =================================================================================================================== 00:13:10.571 Total : 169379.00 661.64 0.00 0.00 374.97 251.35 640.47 00:13:11.138 21:04:25 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:11.396 21:04:25 -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:11.396 00:13:11.396 real 0m13.487s 00:13:11.396 user 0m10.277s 00:13:11.396 sys 0m2.995s 00:13:11.396 21:04:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.396 ************************************ 00:13:11.396 END TEST xnvme_bdevperf 00:13:11.396 ************************************ 00:13:11.396 21:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.396 ************************************ 00:13:11.396 END TEST nvme_xnvme 00:13:11.396 ************************************ 00:13:11.396 00:13:11.396 real 0m55.387s 00:13:11.396 user 0m46.711s 00:13:11.396 sys 0m7.846s 00:13:11.396 21:04:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.396 21:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.397 21:04:25 -- spdk/autotest.sh@257 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:11.397 21:04:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:11.397 21:04:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:11.397 21:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.397 ************************************ 00:13:11.397 START TEST blockdev_xnvme 00:13:11.397 ************************************ 00:13:11.397 21:04:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:11.397 * Looking for test storage... 
00:13:11.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:11.397 21:04:25 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:11.397 21:04:25 -- bdev/nbd_common.sh@6 -- # set -e 00:13:11.397 21:04:25 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:11.397 21:04:25 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:11.397 21:04:25 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:11.397 21:04:25 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:11.397 21:04:25 -- bdev/blockdev.sh@18 -- # : 00:13:11.397 21:04:25 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:13:11.397 21:04:25 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:13:11.397 21:04:25 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:13:11.397 21:04:25 -- bdev/blockdev.sh@672 -- # uname -s 00:13:11.397 21:04:25 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:13:11.397 21:04:25 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:13:11.397 21:04:25 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:13:11.397 21:04:25 -- bdev/blockdev.sh@681 -- # crypto_device= 00:13:11.397 21:04:25 -- bdev/blockdev.sh@682 -- # dek= 00:13:11.397 21:04:25 -- bdev/blockdev.sh@683 -- # env_ctx= 00:13:11.397 21:04:25 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:13:11.397 21:04:25 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:13:11.397 21:04:25 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:13:11.397 21:04:25 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:13:11.397 21:04:25 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:13:11.397 21:04:25 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67969 00:13:11.397 21:04:25 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:11.397 21:04:25 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:11.397 21:04:25 -- bdev/blockdev.sh@47 -- # waitforlisten 67969 00:13:11.397 21:04:25 -- common/autotest_common.sh@819 -- # '[' -z 67969 ']' 00:13:11.397 21:04:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.397 21:04:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:11.397 21:04:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.397 21:04:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:11.397 21:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.656 [2024-07-13 21:04:25.388653] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:11.656 [2024-07-13 21:04:25.389109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67969 ] 00:13:11.656 [2024-07-13 21:04:25.555812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.915 [2024-07-13 21:04:25.729811] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:11.915 [2024-07-13 21:04:25.730093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.292 21:04:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:13.292 21:04:27 -- common/autotest_common.sh@852 -- # return 0 00:13:13.292 21:04:27 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:13:13.292 21:04:27 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:13:13.292 21:04:27 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:13:13.292 21:04:27 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:13:13.292 21:04:27 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:13.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:13.859 Waiting for block devices as requested 00:13:13.859 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:13:13.859 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.117 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.117 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:19.381 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:19.381 21:04:32 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:13:19.381 21:04:32 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:13:19.381 21:04:32 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:13:19.381 21:04:32 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:13:19.381 21:04:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:19.381 21:04:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:13:19.381 21:04:32 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:13:19.381 21:04:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:19.381 21:04:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:13:19.381 21:04:32 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:13:19.381 21:04:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:19.381 21:04:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:13:19.381 21:04:32 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:13:19.381 21:04:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:19.381 21:04:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:13:19.381 21:04:32 -- common/autotest_common.sh@1647 -- # local 
device=nvme1n2 00:13:19.381 21:04:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:19.381 21:04:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:13:19.381 21:04:32 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:13:19.381 21:04:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:19.381 21:04:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:13:19.381 21:04:32 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:13:19.381 21:04:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:19.381 21:04:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:19.382 21:04:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:19.382 21:04:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:13:19.382 21:04:32 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:13:19.382 21:04:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:19.382 21:04:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:19.382 21:04:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:19.382 21:04:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:19.382 21:04:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:19.382 21:04:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:19.382 21:04:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:19.382 21:04:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:19.382 21:04:32 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:13:19.382 21:04:32 -- bdev/blockdev.sh@98 -- # rpc_cmd 
00:13:19.382 21:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.382 21:04:32 -- common/autotest_common.sh@10 -- # set +x 00:13:19.382 21:04:32 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:19.382 nvme0n1 00:13:19.382 nvme1n1 00:13:19.382 nvme1n2 00:13:19.382 nvme1n3 00:13:19.382 nvme2n1 00:13:19.382 nvme3n1 00:13:19.382 21:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.382 21:04:33 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:13:19.382 21:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.382 21:04:33 -- common/autotest_common.sh@10 -- # set +x 00:13:19.382 21:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.382 21:04:33 -- bdev/blockdev.sh@738 -- # cat 00:13:19.382 21:04:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:13:19.382 21:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.382 21:04:33 -- common/autotest_common.sh@10 -- # set +x 00:13:19.382 21:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.382 21:04:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:13:19.382 21:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.382 21:04:33 -- common/autotest_common.sh@10 -- # set +x 00:13:19.382 21:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.382 21:04:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:19.382 21:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.382 21:04:33 -- common/autotest_common.sh@10 -- # set +x 00:13:19.382 21:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.382 21:04:33 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:13:19.382 21:04:33 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:13:19.382 21:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.382 21:04:33 -- common/autotest_common.sh@10 -- # set +x 00:13:19.382 21:04:33 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:13:19.382 21:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.382 21:04:33 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:13:19.382 21:04:33 -- bdev/blockdev.sh@747 -- # jq -r .name 00:13:19.382 21:04:33 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "676c2454-654c-4eff-83d5-d4b776688e87"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "676c2454-654c-4eff-83d5-d4b776688e87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "47a510ef-719f-471c-ba3f-c4594c2151b9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "47a510ef-719f-471c-ba3f-c4594c2151b9",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "2765714a-7fb7-4f66-a91d-731b4357e18e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2765714a-7fb7-4f66-a91d-731b4357e18e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "218ba5ff-30b5-4e84-ab17-c2a92491b27a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "218ba5ff-30b5-4e84-ab17-c2a92491b27a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a6ec0977-8688-4ed6-b3a4-7bf922634f19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a6ec0977-8688-4ed6-b3a4-7bf922634f19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "200a7db0-1756-464e-bf23-7bd8db50d6b0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "200a7db0-1756-464e-bf23-7bd8db50d6b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:19.382 21:04:33 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:13:19.382 21:04:33 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:13:19.382 21:04:33 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:13:19.382 21:04:33 -- bdev/blockdev.sh@752 -- # killprocess 67969 00:13:19.382 21:04:33 -- 
common/autotest_common.sh@926 -- # '[' -z 67969 ']' 00:13:19.382 21:04:33 -- common/autotest_common.sh@930 -- # kill -0 67969 00:13:19.382 21:04:33 -- common/autotest_common.sh@931 -- # uname 00:13:19.382 21:04:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:19.382 21:04:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67969 00:13:19.382 killing process with pid 67969 00:13:19.382 21:04:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:19.382 21:04:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:19.382 21:04:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67969' 00:13:19.382 21:04:33 -- common/autotest_common.sh@945 -- # kill 67969 00:13:19.382 21:04:33 -- common/autotest_common.sh@950 -- # wait 67969 00:13:21.284 21:04:35 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:21.284 21:04:35 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:21.284 21:04:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:13:21.284 21:04:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:21.284 21:04:35 -- common/autotest_common.sh@10 -- # set +x 00:13:21.284 ************************************ 00:13:21.284 START TEST bdev_hello_world 00:13:21.284 ************************************ 00:13:21.284 21:04:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:21.542 [2024-07-13 21:04:35.234541] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:21.542 [2024-07-13 21:04:35.234689] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68368 ] 00:13:21.542 [2024-07-13 21:04:35.390539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.800 [2024-07-13 21:04:35.560222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.058 [2024-07-13 21:04:35.929213] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:22.058 [2024-07-13 21:04:35.929289] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:22.058 [2024-07-13 21:04:35.929329] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:22.058 [2024-07-13 21:04:35.931491] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:22.058 [2024-07-13 21:04:35.931902] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:22.058 [2024-07-13 21:04:35.931935] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:22.058 [2024-07-13 21:04:35.932190] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:13:22.058 00:13:22.058 [2024-07-13 21:04:35.932220] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:23.431 00:13:23.432 real 0m1.775s 00:13:23.432 user 0m1.481s 00:13:23.432 sys 0m0.180s 00:13:23.432 21:04:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.432 ************************************ 00:13:23.432 END TEST bdev_hello_world 00:13:23.432 ************************************ 00:13:23.432 21:04:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.432 21:04:36 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:13:23.432 21:04:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:23.432 21:04:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:23.432 21:04:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.432 ************************************ 00:13:23.432 START TEST bdev_bounds 00:13:23.432 ************************************ 00:13:23.432 21:04:36 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:13:23.432 21:04:36 -- bdev/blockdev.sh@288 -- # bdevio_pid=68405 00:13:23.432 Process bdevio pid: 68405 00:13:23.432 21:04:36 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:23.432 21:04:36 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 68405' 00:13:23.432 21:04:36 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:23.432 21:04:36 -- bdev/blockdev.sh@291 -- # waitforlisten 68405 00:13:23.432 21:04:36 -- common/autotest_common.sh@819 -- # '[' -z 68405 ']' 00:13:23.432 21:04:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.432 21:04:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:23.432 21:04:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.432 21:04:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:23.432 21:04:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.432 [2024-07-13 21:04:37.074391] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:23.432 [2024-07-13 21:04:37.074815] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68405 ] 00:13:23.432 [2024-07-13 21:04:37.243359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.690 [2024-07-13 21:04:37.419972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.690 [2024-07-13 21:04:37.420093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.690 [2024-07-13 21:04:37.420104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.255 21:04:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:24.255 21:04:37 -- common/autotest_common.sh@852 -- # return 0 00:13:24.255 21:04:37 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:24.255 I/O targets: 00:13:24.255 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:24.255 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:24.255 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:24.255 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:24.255 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:24.255 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:24.256 00:13:24.256 00:13:24.256 CUnit - A unit testing framework for C - Version 2.1-3 00:13:24.256 http://cunit.sourceforge.net/ 00:13:24.256 00:13:24.256 00:13:24.256 Suite: bdevio tests on: nvme3n1 00:13:24.256 Test: blockdev write read block ...passed 00:13:24.256 Test: blockdev write zeroes read block ...passed 00:13:24.256 Test: blockdev write zeroes read no split ...passed 00:13:24.256 Test: blockdev write zeroes read split ...passed 00:13:24.256 Test: blockdev write zeroes read split partial ...passed 00:13:24.256 Test: blockdev reset ...passed 00:13:24.256 Test: blockdev write read 8 blocks ...passed 00:13:24.256 Test: blockdev write read size > 128k ...passed 00:13:24.256 Test: blockdev write read invalid size ...passed 00:13:24.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:24.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:24.256 Test: blockdev write read max offset ...passed 00:13:24.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:24.256 Test: blockdev writev readv 8 blocks ...passed 00:13:24.256 Test: blockdev writev readv 30 x 1block ...passed 00:13:24.256 Test: blockdev writev readv block ...passed 00:13:24.256 Test: blockdev writev readv size > 128k ...passed 00:13:24.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:24.256 Test: blockdev comparev and writev ...passed 00:13:24.256 Test: blockdev nvme passthru rw ...passed 00:13:24.256 Test: blockdev nvme passthru vendor specific ...passed 00:13:24.256 Test: blockdev nvme admin passthru ...passed 00:13:24.256 Test: blockdev copy ...passed 00:13:24.256 Suite: bdevio tests on: nvme2n1 00:13:24.256 Test: blockdev write read block ...passed 00:13:24.256 Test: blockdev write zeroes read block ...passed 00:13:24.256 Test: blockdev write zeroes read no split ...passed 00:13:24.514 Test: blockdev write zeroes read split ...passed 00:13:24.514 Test: blockdev write zeroes read split partial ...passed 00:13:24.514 Test: blockdev reset ...passed 00:13:24.514 Test: blockdev write read 8 blocks ...passed 00:13:24.514 Test: blockdev write read size > 128k 
...passed 00:13:24.514 Test: blockdev write read invalid size ...passed 00:13:24.514 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:24.514 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:24.514 Test: blockdev write read max offset ...passed 00:13:24.514 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:24.514 Test: blockdev writev readv 8 blocks ...passed 00:13:24.514 Test: blockdev writev readv 30 x 1block ...passed 00:13:24.514 Test: blockdev writev readv block ...passed 00:13:24.514 Test: blockdev writev readv size > 128k ...passed 00:13:24.514 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:24.514 Test: blockdev comparev and writev ...passed 00:13:24.514 Test: blockdev nvme passthru rw ...passed 00:13:24.514 Test: blockdev nvme passthru vendor specific ...passed 00:13:24.514 Test: blockdev nvme admin passthru ...passed 00:13:24.514 Test: blockdev copy ...passed 00:13:24.514 Suite: bdevio tests on: nvme1n3 00:13:24.514 Test: blockdev write read block ...passed 00:13:24.514 Test: blockdev write zeroes read block ...passed 00:13:24.514 Test: blockdev write zeroes read no split ...passed 00:13:24.514 Test: blockdev write zeroes read split ...passed 00:13:24.514 Test: blockdev write zeroes read split partial ...passed 00:13:24.514 Test: blockdev reset ...passed 00:13:24.514 Test: blockdev write read 8 blocks ...passed 00:13:24.514 Test: blockdev write read size > 128k ...passed 00:13:24.514 Test: blockdev write read invalid size ...passed 00:13:24.514 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:24.514 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:24.514 Test: blockdev write read max offset ...passed 00:13:24.514 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:24.514 Test: blockdev writev readv 8 blocks ...passed 00:13:24.514 Test: blockdev writev readv 30 x 1block ...passed 00:13:24.514 Test: blockdev writev readv block ...passed 00:13:24.514 Test: blockdev writev readv size > 128k ...passed 00:13:24.514 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:24.514 Test: blockdev comparev and writev ...passed 00:13:24.514 Test: blockdev nvme passthru rw ...passed 00:13:24.514 Test: blockdev nvme passthru vendor specific ...passed 00:13:24.514 Test: blockdev nvme admin passthru ...passed 00:13:24.514 Test: blockdev copy ...passed 00:13:24.514 Suite: bdevio tests on: nvme1n2 00:13:24.514 Test: blockdev write read block ...passed 00:13:24.514 Test: blockdev write zeroes read block ...passed 00:13:24.514 Test: blockdev write zeroes read no split ...passed 00:13:24.514 Test: blockdev write zeroes read split ...passed 00:13:24.514 Test: blockdev write zeroes read split partial ...passed 00:13:24.514 Test: blockdev reset ...passed 00:13:24.514 Test: blockdev write read 8 blocks ...passed 00:13:24.514 Test: blockdev write read size > 128k ...passed 00:13:24.514 Test: blockdev write read invalid size ...passed 00:13:24.514 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:24.514 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:24.514 Test: blockdev write read max offset ...passed 00:13:24.514 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:24.514 Test: blockdev writev readv 8 blocks ...passed 00:13:24.514 Test: blockdev writev readv 30 x 1block ...passed 00:13:24.514 Test: blockdev writev readv 
block ...passed 00:13:24.514 Test: blockdev writev readv size > 128k ...passed 00:13:24.514 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:24.514 Test: blockdev comparev and writev ...passed 00:13:24.514 Test: blockdev nvme passthru rw ...passed 00:13:24.514 Test: blockdev nvme passthru vendor specific ...passed 00:13:24.514 Test: blockdev nvme admin passthru ...passed 00:13:24.514 Test: blockdev copy ...passed 00:13:24.514 Suite: bdevio tests on: nvme1n1 00:13:24.514 Test: blockdev write read block ...passed 00:13:24.514 Test: blockdev write zeroes read block ...passed 00:13:24.514 Test: blockdev write zeroes read no split ...passed 00:13:24.514 Test: blockdev write zeroes read split ...passed 00:13:24.514 Test: blockdev write zeroes read split partial ...passed 00:13:24.514 Test: blockdev reset ...passed 00:13:24.514 Test: blockdev write read 8 blocks ...passed 00:13:24.514 Test: blockdev write read size > 128k ...passed 00:13:24.514 Test: blockdev write read invalid size ...passed 00:13:24.514 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:24.514 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:24.514 Test: blockdev write read max offset ...passed 00:13:24.515 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:24.515 Test: blockdev writev readv 8 blocks ...passed 00:13:24.515 Test: blockdev writev readv 30 x 1block ...passed 00:13:24.515 Test: blockdev writev readv block ...passed 00:13:24.773 Test: blockdev writev readv size > 128k ...passed 00:13:24.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:24.773 Test: blockdev comparev and writev ...passed 00:13:24.773 Test: blockdev nvme passthru rw ...passed 00:13:24.773 Test: blockdev nvme passthru vendor specific ...passed 00:13:24.773 Test: blockdev nvme admin passthru ...passed 00:13:24.773 Test: blockdev copy ...passed 00:13:24.773 Suite: bdevio tests on: nvme0n1 00:13:24.773 Test: blockdev write read block ...passed 00:13:24.773 Test: blockdev write zeroes read block ...passed 00:13:24.773 Test: blockdev write zeroes read no split ...passed 00:13:24.773 Test: blockdev write zeroes read split ...passed 00:13:24.773 Test: blockdev write zeroes read split partial ...passed 00:13:24.773 Test: blockdev reset ...passed 00:13:24.773 Test: blockdev write read 8 blocks ...passed 00:13:24.773 Test: blockdev write read size > 128k ...passed 00:13:24.773 Test: blockdev write read invalid size ...passed 00:13:24.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:24.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:24.773 Test: blockdev write read max offset ...passed 00:13:24.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:24.773 Test: blockdev writev readv 8 blocks ...passed 00:13:24.773 Test: blockdev writev readv 30 x 1block ...passed 00:13:24.773 Test: blockdev writev readv block ...passed 00:13:24.773 Test: blockdev writev readv size > 128k ...passed 00:13:24.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:24.773 Test: blockdev comparev and writev ...passed 00:13:24.773 Test: blockdev nvme passthru rw ...passed 00:13:24.773 Test: blockdev nvme passthru vendor specific ...passed 00:13:24.773 Test: blockdev nvme admin passthru ...passed 00:13:24.773 Test: blockdev copy ...passed 00:13:24.773 00:13:24.773 Run Summary: Type Total Ran Passed Failed Inactive 00:13:24.773 suites 6 6 n/a 0 0 
00:13:24.773 tests 138 138 138 0 0 00:13:24.773 asserts 780 780 780 0 n/a 00:13:24.773 00:13:24.773 Elapsed time = 1.204 seconds 00:13:24.773 0 00:13:24.773 21:04:38 -- bdev/blockdev.sh@293 -- # killprocess 68405 00:13:24.773 21:04:38 -- common/autotest_common.sh@926 -- # '[' -z 68405 ']' 00:13:24.773 21:04:38 -- common/autotest_common.sh@930 -- # kill -0 68405 00:13:24.773 21:04:38 -- common/autotest_common.sh@931 -- # uname 00:13:24.773 21:04:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:24.773 21:04:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68405 00:13:24.773 21:04:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:24.773 21:04:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:24.773 21:04:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68405' 00:13:24.773 killing process with pid 68405 00:13:24.773 21:04:38 -- common/autotest_common.sh@945 -- # kill 68405 00:13:24.773 21:04:38 -- common/autotest_common.sh@950 -- # wait 68405 00:13:25.708 21:04:39 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:13:25.708 00:13:25.708 real 0m2.592s 00:13:25.708 user 0m6.212s 00:13:25.708 sys 0m0.343s 00:13:25.708 21:04:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.708 21:04:39 -- common/autotest_common.sh@10 -- # set +x 00:13:25.708 ************************************ 00:13:25.708 END TEST bdev_bounds 00:13:25.708 ************************************ 00:13:25.708 21:04:39 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:13:25.708 21:04:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:25.708 21:04:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.708 21:04:39 -- common/autotest_common.sh@10 -- # set +x 00:13:25.708 ************************************ 00:13:25.708 START TEST bdev_nbd 00:13:25.708 ************************************ 00:13:25.708 21:04:39 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:13:25.708 21:04:39 -- bdev/blockdev.sh@298 -- # uname -s 00:13:25.967 21:04:39 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:13:25.967 21:04:39 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.967 21:04:39 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:25.967 21:04:39 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:25.967 21:04:39 -- bdev/blockdev.sh@302 -- # local bdev_all 00:13:25.967 21:04:39 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:13:25.967 21:04:39 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:13:25.967 21:04:39 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:25.967 21:04:39 -- bdev/blockdev.sh@309 -- # local nbd_all 00:13:25.967 21:04:39 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:13:25.967 21:04:39 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:25.967 21:04:39 -- bdev/blockdev.sh@312 -- # local nbd_list 00:13:25.967 21:04:39 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:25.967 21:04:39 -- bdev/blockdev.sh@313 -- # local bdev_list 00:13:25.967 21:04:39 -- bdev/blockdev.sh@316 -- # nbd_pid=68464 00:13:25.967 21:04:39 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:25.967 21:04:39 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:25.967 21:04:39 -- bdev/blockdev.sh@318 -- # waitforlisten 68464 /var/tmp/spdk-nbd.sock 00:13:25.967 21:04:39 -- common/autotest_common.sh@819 -- # '[' -z 68464 ']' 00:13:25.967 21:04:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:25.967 21:04:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:25.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:25.967 21:04:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:25.967 21:04:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:25.967 21:04:39 -- common/autotest_common.sh@10 -- # set +x 00:13:25.967 [2024-07-13 21:04:39.713925] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:25.967 [2024-07-13 21:04:39.714064] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.967 [2024-07-13 21:04:39.869020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.225 [2024-07-13 21:04:40.039509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.792 21:04:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:26.792 21:04:40 -- common/autotest_common.sh@852 -- # return 0 00:13:26.792 21:04:40 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@24 -- # local i 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:26.792 21:04:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:27.050 21:04:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:27.050 21:04:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:27.050 21:04:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:27.050 21:04:40 -- common/autotest_common.sh@856 -- # local 
nbd_name=nbd0 00:13:27.050 21:04:40 -- common/autotest_common.sh@857 -- # local i 00:13:27.050 21:04:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:27.050 21:04:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:27.050 21:04:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:27.050 21:04:40 -- common/autotest_common.sh@861 -- # break 00:13:27.050 21:04:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:27.050 21:04:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:27.050 21:04:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.050 1+0 records in 00:13:27.050 1+0 records out 00:13:27.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643321 s, 6.4 MB/s 00:13:27.050 21:04:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.050 21:04:40 -- common/autotest_common.sh@874 -- # size=4096 00:13:27.050 21:04:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.050 21:04:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:27.050 21:04:40 -- common/autotest_common.sh@877 -- # return 0 00:13:27.050 21:04:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:27.050 21:04:40 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:27.051 21:04:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:27.309 21:04:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:27.309 21:04:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:27.309 21:04:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:27.309 21:04:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:13:27.309 21:04:41 -- common/autotest_common.sh@857 -- # local i 00:13:27.309 21:04:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:27.309 21:04:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:27.309 21:04:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:13:27.309 21:04:41 -- common/autotest_common.sh@861 -- # break 00:13:27.309 21:04:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:27.309 21:04:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:27.309 21:04:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.309 1+0 records in 00:13:27.309 1+0 records out 00:13:27.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511001 s, 8.0 MB/s 00:13:27.309 21:04:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.309 21:04:41 -- common/autotest_common.sh@874 -- # size=4096 00:13:27.309 21:04:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.309 21:04:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:27.309 21:04:41 -- common/autotest_common.sh@877 -- # return 0 00:13:27.309 21:04:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:27.309 21:04:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:27.309 21:04:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:13:27.568 21:04:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:27.568 21:04:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:27.568 21:04:41 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:13:27.568 21:04:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:13:27.568 21:04:41 -- common/autotest_common.sh@857 -- # local i 00:13:27.568 21:04:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:27.568 21:04:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:27.568 21:04:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:13:27.568 21:04:41 -- common/autotest_common.sh@861 -- # break 00:13:27.568 21:04:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:27.568 21:04:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:27.568 21:04:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.568 1+0 records in 00:13:27.568 1+0 records out 00:13:27.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663021 s, 6.2 MB/s 00:13:27.568 21:04:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.568 21:04:41 -- common/autotest_common.sh@874 -- # size=4096 00:13:27.568 21:04:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.568 21:04:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:27.568 21:04:41 -- common/autotest_common.sh@877 -- # return 0 00:13:27.568 21:04:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:27.568 21:04:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:27.568 21:04:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:13:27.826 21:04:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:27.826 21:04:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:27.826 21:04:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:27.826 21:04:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:13:27.826 21:04:41 -- common/autotest_common.sh@857 -- # local i 00:13:27.826 21:04:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:27.826 21:04:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:27.826 21:04:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:13:27.826 21:04:41 -- common/autotest_common.sh@861 -- # break 00:13:27.826 21:04:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:27.826 21:04:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:27.826 21:04:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.826 1+0 records in 00:13:27.826 1+0 records out 00:13:27.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108923 s, 3.8 MB/s 00:13:27.826 21:04:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.826 21:04:41 -- common/autotest_common.sh@874 -- # size=4096 00:13:27.826 21:04:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.826 21:04:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:27.826 21:04:41 -- common/autotest_common.sh@877 -- # return 0 00:13:27.827 21:04:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:27.827 21:04:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:27.827 21:04:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:28.085 21:04:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:28.085 21:04:41 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:28.085 21:04:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:28.085 21:04:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:13:28.085 21:04:41 -- common/autotest_common.sh@857 -- # local i 00:13:28.085 21:04:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:28.085 21:04:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:28.085 21:04:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:13:28.085 21:04:41 -- common/autotest_common.sh@861 -- # break 00:13:28.085 21:04:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:28.085 21:04:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:28.085 21:04:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.085 1+0 records in 00:13:28.085 1+0 records out 00:13:28.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00159491 s, 2.6 MB/s 00:13:28.085 21:04:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.085 21:04:41 -- common/autotest_common.sh@874 -- # size=4096 00:13:28.085 21:04:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.085 21:04:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:28.085 21:04:41 -- common/autotest_common.sh@877 -- # return 0 00:13:28.085 21:04:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:28.085 21:04:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:28.085 21:04:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:28.653 21:04:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:28.653 21:04:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:28.653 21:04:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:28.653 21:04:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:13:28.653 21:04:42 -- common/autotest_common.sh@857 -- # local i 00:13:28.653 21:04:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:28.653 21:04:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:28.653 21:04:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:13:28.653 21:04:42 -- common/autotest_common.sh@861 -- # break 00:13:28.653 21:04:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:28.653 21:04:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:28.653 21:04:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.653 1+0 records in 00:13:28.653 1+0 records out 00:13:28.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787182 s, 5.2 MB/s 00:13:28.653 21:04:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.653 21:04:42 -- common/autotest_common.sh@874 -- # size=4096 00:13:28.653 21:04:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.653 21:04:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:28.653 21:04:42 -- common/autotest_common.sh@877 -- # return 0 00:13:28.653 21:04:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:28.653 21:04:42 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:28.653 21:04:42 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:28.653 21:04:42 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd0", 00:13:28.653 "bdev_name": "nvme0n1" 00:13:28.653 }, 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd1", 00:13:28.653 "bdev_name": "nvme1n1" 00:13:28.653 }, 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd2", 00:13:28.653 "bdev_name": "nvme1n2" 00:13:28.653 }, 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd3", 00:13:28.653 "bdev_name": "nvme1n3" 00:13:28.653 }, 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd4", 00:13:28.653 "bdev_name": "nvme2n1" 00:13:28.653 }, 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd5", 00:13:28.653 "bdev_name": "nvme3n1" 00:13:28.653 } 00:13:28.653 ]' 00:13:28.653 21:04:42 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:28.653 21:04:42 -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd0", 00:13:28.653 "bdev_name": "nvme0n1" 00:13:28.653 }, 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd1", 00:13:28.653 "bdev_name": "nvme1n1" 00:13:28.653 }, 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd2", 00:13:28.653 "bdev_name": "nvme1n2" 00:13:28.653 }, 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd3", 00:13:28.653 "bdev_name": "nvme1n3" 00:13:28.653 }, 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd4", 00:13:28.653 "bdev_name": "nvme2n1" 00:13:28.653 }, 00:13:28.653 { 00:13:28.653 "nbd_device": "/dev/nbd5", 00:13:28.653 "bdev_name": "nvme3n1" 00:13:28.653 } 00:13:28.653 ]' 00:13:28.653 21:04:42 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:28.913 21:04:42 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:28.913 21:04:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.913 21:04:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:28.913 21:04:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:28.913 21:04:42 -- bdev/nbd_common.sh@51 -- # local i 00:13:28.913 21:04:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.913 21:04:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:29.172 21:04:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.172 21:04:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.172 21:04:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.172 21:04:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.172 21:04:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.172 21:04:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.172 21:04:42 -- bdev/nbd_common.sh@41 -- # break 00:13:29.172 21:04:42 -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.172 21:04:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.172 21:04:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:29.172 21:04:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.172 21:04:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.172 21:04:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.172 21:04:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.172 21:04:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.172 21:04:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:13:29.172 21:04:43 -- bdev/nbd_common.sh@41 -- # break 00:13:29.172 21:04:43 -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.172 21:04:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.172 21:04:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:29.431 21:04:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:29.431 21:04:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:29.431 21:04:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:29.431 21:04:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.431 21:04:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.431 21:04:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:29.431 21:04:43 -- bdev/nbd_common.sh@41 -- # break 00:13:29.431 21:04:43 -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.431 21:04:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.431 21:04:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:29.689 21:04:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:29.689 21:04:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:29.689 21:04:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:29.689 21:04:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.689 21:04:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.689 21:04:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:29.689 21:04:43 -- bdev/nbd_common.sh@41 -- # break 00:13:29.689 21:04:43 -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.689 21:04:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.689 21:04:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:29.947 21:04:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:29.947 21:04:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:29.947 21:04:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:29.947 21:04:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.947 21:04:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.947 21:04:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:29.947 21:04:43 -- bdev/nbd_common.sh@41 -- # break 00:13:29.947 21:04:43 -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.947 21:04:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.947 21:04:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@41 -- # break 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.205 21:04:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
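The nbd_get_disks RPC invoked here returns a JSON array of {nbd_device, bdev_name} objects (the six-entry array appears earlier in the trace; at this point, after all disks were stopped, it returns []). The trace extracts only .nbd_device with jq; a small variation on that same query pairs each NBD device with its backing bdev — a sketch, assuming the same rpc.py and socket path as above:

    # Pair each NBD device with its backing bdev, one "/dev/nbdX bdev" per line.
    nbd_disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    echo "$nbd_disks_json" | jq -r '.[] | "\(.nbd_device) \(.bdev_name)"'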
00:13:30.464 21:04:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@65 -- # true 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@65 -- # count=0 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@122 -- # count=0 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:30.464 21:04:44 -- bdev/nbd_common.sh@127 -- # return 0 00:13:30.465 21:04:44 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@12 -- # local i 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:30.465 21:04:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:30.729 /dev/nbd0 00:13:30.729 21:04:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:30.729 21:04:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:30.729 21:04:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:30.729 21:04:44 -- common/autotest_common.sh@857 -- # local i 00:13:30.729 21:04:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:30.729 21:04:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:30.729 21:04:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:30.729 21:04:44 -- common/autotest_common.sh@861 -- # break 00:13:30.729 21:04:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:30.729 21:04:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:30.729 21:04:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.729 1+0 records in 00:13:30.729 1+0 records out 00:13:30.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792135 s, 
5.2 MB/s 00:13:30.729 21:04:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.729 21:04:44 -- common/autotest_common.sh@874 -- # size=4096 00:13:30.729 21:04:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.729 21:04:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:30.729 21:04:44 -- common/autotest_common.sh@877 -- # return 0 00:13:30.729 21:04:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.729 21:04:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:30.729 21:04:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:30.992 /dev/nbd1 00:13:30.992 21:04:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:30.992 21:04:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:30.992 21:04:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:13:30.992 21:04:44 -- common/autotest_common.sh@857 -- # local i 00:13:30.992 21:04:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:30.992 21:04:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:30.992 21:04:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:13:30.992 21:04:44 -- common/autotest_common.sh@861 -- # break 00:13:30.992 21:04:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:30.992 21:04:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:30.992 21:04:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.992 1+0 records in 00:13:30.992 1+0 records out 00:13:30.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417419 s, 9.8 MB/s 00:13:30.992 21:04:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.251 21:04:44 -- common/autotest_common.sh@874 -- # size=4096 00:13:31.251 21:04:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.251 21:04:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:31.251 21:04:44 -- common/autotest_common.sh@877 -- # return 0 00:13:31.251 21:04:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.251 21:04:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:31.251 21:04:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:13:31.251 /dev/nbd10 00:13:31.251 21:04:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:31.251 21:04:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:31.251 21:04:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:13:31.251 21:04:45 -- common/autotest_common.sh@857 -- # local i 00:13:31.251 21:04:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:31.251 21:04:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:31.251 21:04:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:13:31.251 21:04:45 -- common/autotest_common.sh@861 -- # break 00:13:31.251 21:04:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:31.251 21:04:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:31.251 21:04:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.251 1+0 records in 00:13:31.251 1+0 records out 00:13:31.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000588153 s, 7.0 MB/s 00:13:31.251 21:04:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.251 21:04:45 -- common/autotest_common.sh@874 -- # size=4096 00:13:31.251 21:04:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.251 21:04:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:31.251 21:04:45 -- common/autotest_common.sh@877 -- # return 0 00:13:31.251 21:04:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.251 21:04:45 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:31.251 21:04:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:13:31.510 /dev/nbd11 00:13:31.510 21:04:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:31.510 21:04:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:31.510 21:04:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:13:31.510 21:04:45 -- common/autotest_common.sh@857 -- # local i 00:13:31.510 21:04:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:31.510 21:04:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:31.510 21:04:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:13:31.510 21:04:45 -- common/autotest_common.sh@861 -- # break 00:13:31.510 21:04:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:31.510 21:04:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:31.510 21:04:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.510 1+0 records in 00:13:31.510 1+0 records out 00:13:31.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977605 s, 4.2 MB/s 00:13:31.510 21:04:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.510 21:04:45 -- common/autotest_common.sh@874 -- # size=4096 00:13:31.510 21:04:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.510 21:04:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:31.510 21:04:45 -- common/autotest_common.sh@877 -- # return 0 00:13:31.510 21:04:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.510 21:04:45 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:31.510 21:04:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:13:31.768 /dev/nbd12 00:13:31.768 21:04:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:32.026 21:04:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:32.026 21:04:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:13:32.026 21:04:45 -- common/autotest_common.sh@857 -- # local i 00:13:32.026 21:04:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:32.026 21:04:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:32.026 21:04:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:13:32.026 21:04:45 -- common/autotest_common.sh@861 -- # break 00:13:32.026 21:04:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:32.026 21:04:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:32.026 21:04:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.026 1+0 records in 00:13:32.026 1+0 records out 00:13:32.026 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000869086 s, 4.7 MB/s 00:13:32.026 21:04:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.026 21:04:45 -- common/autotest_common.sh@874 -- # size=4096 00:13:32.026 21:04:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.026 21:04:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:32.026 21:04:45 -- common/autotest_common.sh@877 -- # return 0 00:13:32.026 21:04:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.026 21:04:45 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:32.026 21:04:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:32.285 /dev/nbd13 00:13:32.285 21:04:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:32.285 21:04:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:32.285 21:04:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:13:32.285 21:04:45 -- common/autotest_common.sh@857 -- # local i 00:13:32.285 21:04:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:32.285 21:04:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:32.285 21:04:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:13:32.285 21:04:45 -- common/autotest_common.sh@861 -- # break 00:13:32.285 21:04:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:32.285 21:04:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:32.285 21:04:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.285 1+0 records in 00:13:32.285 1+0 records out 00:13:32.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882953 s, 4.6 MB/s 00:13:32.285 21:04:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.285 21:04:45 -- common/autotest_common.sh@874 -- # size=4096 00:13:32.285 21:04:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.285 21:04:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:32.285 21:04:45 -- common/autotest_common.sh@877 -- # return 0 00:13:32.285 21:04:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.285 21:04:45 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:32.285 21:04:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:32.286 21:04:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.286 21:04:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd0", 00:13:32.545 "bdev_name": "nvme0n1" 00:13:32.545 }, 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd1", 00:13:32.545 "bdev_name": "nvme1n1" 00:13:32.545 }, 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd10", 00:13:32.545 "bdev_name": "nvme1n2" 00:13:32.545 }, 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd11", 00:13:32.545 "bdev_name": "nvme1n3" 00:13:32.545 }, 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd12", 00:13:32.545 "bdev_name": "nvme2n1" 00:13:32.545 }, 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd13", 00:13:32.545 "bdev_name": "nvme3n1" 00:13:32.545 } 00:13:32.545 ]' 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:32.545 21:04:46 -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd0", 00:13:32.545 "bdev_name": "nvme0n1" 00:13:32.545 }, 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd1", 00:13:32.545 "bdev_name": "nvme1n1" 00:13:32.545 }, 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd10", 00:13:32.545 "bdev_name": "nvme1n2" 00:13:32.545 }, 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd11", 00:13:32.545 "bdev_name": "nvme1n3" 00:13:32.545 }, 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd12", 00:13:32.545 "bdev_name": "nvme2n1" 00:13:32.545 }, 00:13:32.545 { 00:13:32.545 "nbd_device": "/dev/nbd13", 00:13:32.545 "bdev_name": "nvme3n1" 00:13:32.545 } 00:13:32.545 ]' 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:32.545 /dev/nbd1 00:13:32.545 /dev/nbd10 00:13:32.545 /dev/nbd11 00:13:32.545 /dev/nbd12 00:13:32.545 /dev/nbd13' 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:32.545 /dev/nbd1 00:13:32.545 /dev/nbd10 00:13:32.545 /dev/nbd11 00:13:32.545 /dev/nbd12 00:13:32.545 /dev/nbd13' 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@65 -- # count=6 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@66 -- # echo 6 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@95 -- # count=6 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:32.545 256+0 records in 00:13:32.545 256+0 records out 00:13:32.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00850133 s, 123 MB/s 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:32.545 21:04:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:32.804 256+0 records in 00:13:32.804 256+0 records out 00:13:32.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136559 s, 7.7 MB/s 00:13:32.804 21:04:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:32.804 21:04:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:32.804 256+0 records in 00:13:32.804 256+0 records out 00:13:32.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171955 s, 6.1 MB/s 00:13:32.804 21:04:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:32.804 21:04:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:33.063 256+0 records in 00:13:33.063 256+0 records out 00:13:33.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147496 s, 7.1 MB/s 00:13:33.063 21:04:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.063 21:04:46 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:33.063 256+0 records in 00:13:33.063 256+0 records out 00:13:33.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147229 s, 7.1 MB/s 00:13:33.063 21:04:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.063 21:04:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:33.322 256+0 records in 00:13:33.322 256+0 records out 00:13:33.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164828 s, 6.4 MB/s 00:13:33.322 21:04:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.322 21:04:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:33.582 256+0 records in 00:13:33.582 256+0 records out 00:13:33.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173161 s, 6.1 MB/s 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@51 -- # local i 00:13:33.582 21:04:47 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.582 21:04:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:33.841 21:04:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.841 21:04:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.841 21:04:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.841 21:04:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.841 21:04:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.841 21:04:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.841 21:04:47 -- bdev/nbd_common.sh@41 -- # break 00:13:33.841 21:04:47 -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.841 21:04:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.841 21:04:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:34.101 21:04:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:34.101 21:04:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:34.101 21:04:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:34.101 21:04:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.101 21:04:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.101 21:04:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:34.101 21:04:47 -- bdev/nbd_common.sh@41 -- # break 00:13:34.101 21:04:47 -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.101 21:04:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.101 21:04:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:34.360 21:04:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:34.360 21:04:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:34.360 21:04:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:34.360 21:04:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.360 21:04:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.360 21:04:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:34.360 21:04:48 -- bdev/nbd_common.sh@41 -- # break 00:13:34.360 21:04:48 -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.360 21:04:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.360 21:04:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:34.618 21:04:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:34.618 21:04:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:34.618 21:04:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:34.618 21:04:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.618 21:04:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.618 21:04:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:34.618 21:04:48 -- bdev/nbd_common.sh@41 -- # break 00:13:34.618 21:04:48 -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.618 21:04:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.618 21:04:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:34.876 21:04:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:34.876 21:04:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:34.876 21:04:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:34.876 21:04:48 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.876 21:04:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.876 21:04:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:34.876 21:04:48 -- bdev/nbd_common.sh@41 -- # break 00:13:34.876 21:04:48 -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.876 21:04:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.876 21:04:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@41 -- # break 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.135 21:04:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@65 -- # true 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@65 -- # count=0 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@104 -- # count=0 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@109 -- # return 0 00:13:35.392 21:04:49 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:35.392 21:04:49 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:35.650 malloc_lvol_verify 00:13:35.651 21:04:49 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:35.954 cb8a492e-e922-4d45-9409-81d4c0280b39 00:13:35.954 21:04:49 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:36.212 fc8f7cf4-0bf0-428e-a86f-73b5b4b9de5f 00:13:36.212 21:04:50 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:36.470 /dev/nbd0 00:13:36.470 21:04:50 -- 
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:36.470 mke2fs 1.46.5 (30-Dec-2021) 00:13:36.470 Discarding device blocks: 0/4096 done 00:13:36.470 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:36.470 00:13:36.470 Allocating group tables: 0/1 done 00:13:36.470 Writing inode tables: 0/1 done 00:13:36.470 Creating journal (1024 blocks): done 00:13:36.470 Writing superblocks and filesystem accounting information: 0/1 done 00:13:36.470 00:13:36.470 21:04:50 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:36.470 21:04:50 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:36.470 21:04:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.470 21:04:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:36.470 21:04:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.470 21:04:50 -- bdev/nbd_common.sh@51 -- # local i 00:13:36.470 21:04:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.470 21:04:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:36.728 21:04:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:36.728 21:04:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:36.728 21:04:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:36.728 21:04:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.728 21:04:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.728 21:04:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:36.728 21:04:50 -- bdev/nbd_common.sh@41 -- # break 00:13:36.729 21:04:50 -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.729 21:04:50 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:36.729 21:04:50 -- bdev/nbd_common.sh@147 -- # return 0 00:13:36.729 21:04:50 -- bdev/blockdev.sh@324 -- # killprocess 68464 00:13:36.729 21:04:50 -- common/autotest_common.sh@926 -- # '[' -z 68464 ']' 00:13:36.729 21:04:50 -- common/autotest_common.sh@930 -- # kill -0 68464 00:13:36.729 21:04:50 -- common/autotest_common.sh@931 -- # uname 00:13:36.729 21:04:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:36.729 21:04:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68464 00:13:36.729 killing process with pid 68464 00:13:36.729 21:04:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:36.729 21:04:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:36.729 21:04:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68464' 00:13:36.729 21:04:50 -- common/autotest_common.sh@945 -- # kill 68464 00:13:36.729 21:04:50 -- common/autotest_common.sh@950 -- # wait 68464 00:13:38.106 ************************************ 00:13:38.106 END TEST bdev_nbd 00:13:38.106 ************************************ 00:13:38.106 21:04:51 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:13:38.106 00:13:38.106 real 0m12.021s 00:13:38.106 user 0m17.023s 00:13:38.106 sys 0m3.848s 00:13:38.106 21:04:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.106 21:04:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.106 21:04:51 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:13:38.106 21:04:51 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:13:38.106 21:04:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
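Note on the wait loop traced above: waitfornbd_exit polls /proc/partitions rather than assuming nbd_stop_disk is synchronous, breaking out once the kernel has actually removed the device (or giving up after 20 tries). A minimal reconstruction from the xtrace, with the inter-poll sleep assumed since xtrace does not show it:

    # poll until the nbd device disappears from /proc/partitions (max 20 tries)
    waitfornbd_exit() {
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # assumed back-off between polls; not visible in the trace
            else
                break        # device gone, matching the 'break' hits above
            fi
        done
        return 0
    }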
00:13:38.106 21:04:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.106 21:04:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.106 ************************************ 00:13:38.106 START TEST bdev_fio 00:13:38.106 ************************************ 00:13:38.106 21:04:51 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@329 -- # local env_context 00:13:38.106 21:04:51 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:38.106 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:38.106 21:04:51 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:38.106 21:04:51 -- bdev/blockdev.sh@337 -- # echo '' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:13:38.106 21:04:51 -- bdev/blockdev.sh@337 -- # env_context= 00:13:38.106 21:04:51 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:38.106 21:04:51 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.106 21:04:51 -- common/autotest_common.sh@1260 -- # local workload=verify 00:13:38.106 21:04:51 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:13:38.106 21:04:51 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:38.106 21:04:51 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:38.106 21:04:51 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:38.106 21:04:51 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:13:38.106 21:04:51 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:38.106 21:04:51 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.106 21:04:51 -- common/autotest_common.sh@1280 -- # cat 00:13:38.106 21:04:51 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:13:38.106 21:04:51 -- common/autotest_common.sh@1293 -- # cat 00:13:38.106 21:04:51 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:13:38.106 21:04:51 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:13:38.106 21:04:51 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:38.106 21:04:51 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:13:38.106 21:04:51 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:38.106 21:04:51 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:13:38.106 21:04:51 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:38.106 21:04:51 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:13:38.106 21:04:51 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:38.106 21:04:51 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:13:38.106 21:04:51 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:38.106 21:04:51 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:13:38.106 21:04:51 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:38.106 21:04:51 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@341 -- # echo 
filename=nvme2n1 00:13:38.106 21:04:51 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:38.106 21:04:51 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:13:38.106 21:04:51 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:38.106 21:04:51 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:38.106 21:04:51 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:38.106 21:04:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.106 21:04:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.106 ************************************ 00:13:38.106 START TEST bdev_fio_rw_verify 00:13:38.106 ************************************ 00:13:38.106 21:04:51 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:38.106 21:04:51 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:38.106 21:04:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:38.106 21:04:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:38.106 21:04:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:38.106 21:04:51 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.106 21:04:51 -- common/autotest_common.sh@1320 -- # shift 00:13:38.106 21:04:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:38.106 21:04:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:38.106 21:04:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.106 21:04:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:38.106 21:04:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:38.106 21:04:51 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:38.106 21:04:51 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:38.106 21:04:51 -- common/autotest_common.sh@1326 -- # break 00:13:38.106 21:04:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:38.106 21:04:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:38.106 
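Before the run starts, the harness works out which ASan runtime the fio plugin was linked against and preloads it ahead of the plugin, since ASan requires its runtime to come first in the initial library list. A sketch of the pattern visible in the trace, using the paths and arguments as logged above:

    # resolve the ASan runtime the spdk_bdev fio plugin links against
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
    if [[ -n "$asan_lib" ]]; then
        # sanitizer runtime must precede the plugin in LD_PRELOAD
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
            --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
            /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
            --verify_state_save=0 \
            --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
            --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
    fi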
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:38.106 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:38.106 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:38.106 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:38.106 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:38.106 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:38.106 fio-3.35 00:13:38.107 Starting 6 threads 00:13:50.309 00:13:50.310 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68880: Sat Jul 13 21:05:02 2024 00:13:50.310 read: IOPS=27.3k, BW=107MiB/s (112MB/s)(1068MiB/10001msec) 00:13:50.310 slat (usec): min=3, max=1271, avg= 6.95, stdev= 5.12 00:13:50.310 clat (usec): min=92, max=4966, avg=685.29, stdev=233.66 00:13:50.310 lat (usec): min=99, max=4971, avg=692.23, stdev=234.36 00:13:50.310 clat percentiles (usec): 00:13:50.310 | 50.000th=[ 709], 99.000th=[ 1270], 99.900th=[ 1795], 99.990th=[ 3654], 00:13:50.310 | 99.999th=[ 3982] 00:13:50.310 write: IOPS=27.7k, BW=108MiB/s (113MB/s)(1081MiB/10001msec); 0 zone resets 00:13:50.310 slat (usec): min=10, max=1966, avg=27.15, stdev=27.35 00:13:50.310 clat (usec): min=105, max=8577, avg=774.38, stdev=246.41 00:13:50.310 lat (usec): min=126, max=8601, avg=801.53, stdev=248.83 00:13:50.310 clat percentiles (usec): 00:13:50.310 | 50.000th=[ 775], 99.000th=[ 1467], 99.900th=[ 2024], 99.990th=[ 3064], 00:13:50.310 | 99.999th=[ 8586] 00:13:50.310 bw ( KiB/s): min=98172, max=138048, per=100.00%, avg=110754.53, stdev=2015.02, samples=114 00:13:50.310 iops : min=24541, max=34512, avg=27688.16, stdev=503.78, samples=114 00:13:50.310 lat (usec) : 100=0.01%, 250=2.06%, 500=14.72%, 750=35.70%, 1000=36.97% 00:13:50.310 lat (msec) : 2=10.47%, 4=0.09%, 10=0.01% 00:13:50.310 cpu : usr=62.10%, sys=24.60%, ctx=7366, majf=0, minf=25311 00:13:50.310 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.310 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.310 issued rwts: total=273331,276633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.310 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:50.310 00:13:50.310 Run status group 0 (all jobs): 00:13:50.310 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1068MiB (1120MB), run=10001-10001msec 00:13:50.310 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=1081MiB (1133MB), run=10001-10001msec 00:13:50.310 ----------------------------------------------------- 00:13:50.310 Suppressions used: 00:13:50.310 count bytes template 00:13:50.310 6 48 /usr/src/fio/parse.c 00:13:50.310 3122 299712 /usr/src/fio/iolog.c 00:13:50.310 1 8 libtcmalloc_minimal.so 00:13:50.310 1 904 libcrypto.so 00:13:50.310 ----------------------------------------------------- 00:13:50.310 00:13:50.310 00:13:50.310 real 0m12.231s 00:13:50.310 user 0m39.127s 00:13:50.310 sys 0m15.159s 00:13:50.310 21:05:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.310 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:50.310 
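As a consistency check on the numbers above: at the 4 KiB block size, 27.3k read IOPS works out to 27,300 × 4,096 B ≈ 111.8 MB/s, matching the reported 107 MiB/s (112 MB/s); likewise the 273,331 reads issued over the 10,001 ms runtime come out to ≈27.3k IOPS.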
************************************ 00:13:50.310 END TEST bdev_fio_rw_verify 00:13:50.310 ************************************ 00:13:50.310 21:05:04 -- bdev/blockdev.sh@348 -- # rm -f 00:13:50.310 21:05:04 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:50.310 21:05:04 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:50.310 21:05:04 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:50.310 21:05:04 -- common/autotest_common.sh@1260 -- # local workload=trim 00:13:50.310 21:05:04 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:13:50.310 21:05:04 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:50.310 21:05:04 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:50.310 21:05:04 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:50.310 21:05:04 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:13:50.310 21:05:04 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:50.310 21:05:04 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:50.310 21:05:04 -- common/autotest_common.sh@1280 -- # cat 00:13:50.310 21:05:04 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:13:50.310 21:05:04 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:13:50.310 21:05:04 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:13:50.310 21:05:04 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:50.310 21:05:04 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "676c2454-654c-4eff-83d5-d4b776688e87"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "676c2454-654c-4eff-83d5-d4b776688e87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "47a510ef-719f-471c-ba3f-c4594c2151b9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "47a510ef-719f-471c-ba3f-c4594c2151b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "2765714a-7fb7-4f66-a91d-731b4357e18e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2765714a-7fb7-4f66-a91d-731b4357e18e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' 
' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "218ba5ff-30b5-4e84-ab17-c2a92491b27a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "218ba5ff-30b5-4e84-ab17-c2a92491b27a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a6ec0977-8688-4ed6-b3a4-7bf922634f19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a6ec0977-8688-4ed6-b3a4-7bf922634f19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "200a7db0-1756-464e-bf23-7bd8db50d6b0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "200a7db0-1756-464e-bf23-7bd8db50d6b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:50.310 21:05:04 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:13:50.310 21:05:04 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:50.310 21:05:04 -- bdev/blockdev.sh@360 -- # popd 00:13:50.310 /home/vagrant/spdk_repo/spdk 00:13:50.310 21:05:04 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:13:50.310 21:05:04 -- bdev/blockdev.sh@362 -- # return 0 00:13:50.310 00:13:50.310 real 0m12.410s 00:13:50.310 user 0m39.221s 00:13:50.310 sys 0m15.242s 00:13:50.310 21:05:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.310 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:50.310 ************************************ 00:13:50.310 END TEST bdev_fio 00:13:50.310 ************************************ 00:13:50.310 21:05:04 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:50.310 21:05:04 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:50.310 21:05:04 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:50.310 21:05:04 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:13:50.310 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:50.310 ************************************ 00:13:50.310 START TEST bdev_verify 00:13:50.310 ************************************ 00:13:50.310 21:05:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:50.569 [2024-07-13 21:05:04.263302] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:50.569 [2024-07-13 21:05:04.263472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69051 ] 00:13:50.569 [2024-07-13 21:05:04.437227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:50.828 [2024-07-13 21:05:04.654060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.828 [2024-07-13 21:05:04.654072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.395 Running I/O for 5 seconds... 00:13:56.663 00:13:56.663 Latency(us) 00:13:56.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.663 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0x0 length 0x20000 00:13:56.663 nvme0n1 : 5.07 2630.46 10.28 0.00 0.00 48414.34 15490.33 68157.44 00:13:56.663 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0x20000 length 0x20000 00:13:56.663 nvme0n1 : 5.09 2654.82 10.37 0.00 0.00 47901.57 7238.75 69587.32 00:13:56.663 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0x0 length 0x80000 00:13:56.663 nvme1n1 : 5.08 2666.37 10.42 0.00 0.00 47739.36 4200.26 69110.69 00:13:56.663 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0x80000 length 0x80000 00:13:56.663 nvme1n1 : 5.08 2541.89 9.93 0.00 0.00 49974.96 6076.97 66727.56 00:13:56.663 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0x0 length 0x80000 00:13:56.663 nvme1n2 : 5.08 2585.36 10.10 0.00 0.00 49235.58 5362.04 62914.56 00:13:56.663 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0x80000 length 0x80000 00:13:56.663 nvme1n2 : 5.09 2452.42 9.58 0.00 0.00 51826.88 12749.73 64344.44 00:13:56.663 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0x0 length 0x80000 00:13:56.663 nvme1n3 : 5.08 2531.93 9.89 0.00 0.00 50097.11 5719.51 67204.19 00:13:56.663 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0x80000 length 0x80000 00:13:56.663 nvme1n3 : 5.09 2403.25 9.39 0.00 0.00 52828.29 11796.48 67204.19 00:13:56.663 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0x0 length 0xbd0bd 00:13:56.663 nvme2n1 : 5.07 2937.71 11.48 0.00 0.00 43164.93 10068.71 68157.44 00:13:56.663 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0xbd0bd 
length 0xbd0bd 00:13:56.663 nvme2n1 : 5.09 2892.83 11.30 0.00 0.00 43841.38 5838.66 64821.06 00:13:56.663 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0x0 length 0xa0000 00:13:56.663 nvme3n1 : 5.08 2528.92 9.88 0.00 0.00 50041.37 9055.88 76260.07 00:13:56.663 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.663 Verification LBA range: start 0xa0000 length 0xa0000 00:13:56.663 nvme3n1 : 5.09 2542.93 9.93 0.00 0.00 49777.72 3991.74 71493.82 00:13:56.663 =================================================================================================================== 00:13:56.663 Total : 31368.89 122.53 0.00 0.00 48578.04 3991.74 76260.07 00:13:57.599 00:13:57.599 real 0m7.188s 00:13:57.599 user 0m9.377s 00:13:57.599 sys 0m3.266s 00:13:57.599 21:05:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.599 21:05:11 -- common/autotest_common.sh@10 -- # set +x 00:13:57.599 ************************************ 00:13:57.599 END TEST bdev_verify 00:13:57.599 ************************************ 00:13:57.599 21:05:11 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:57.599 21:05:11 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:57.599 21:05:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:57.599 21:05:11 -- common/autotest_common.sh@10 -- # set +x 00:13:57.599 ************************************ 00:13:57.599 START TEST bdev_verify_big_io 00:13:57.599 ************************************ 00:13:57.599 21:05:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:57.599 [2024-07-13 21:05:11.494740] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:57.599 [2024-07-13 21:05:11.494932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69160 ] 00:13:57.857 [2024-07-13 21:05:11.666298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:58.115 [2024-07-13 21:05:11.836328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.115 [2024-07-13 21:05:11.836351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.682 Running I/O for 5 seconds... 
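The results that follow are the big-I/O variant: bdevperf is invoked with -o 65536, so each verify I/O is 64 KiB rather than the 4 KiB used above, and throughput per IOPS is 16× higher; e.g. the first row's 249.78 IOPS × 64 KiB ≈ 15.6 MiB/s agrees with its reported 15.61 MiB/s.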
00:14:05.246 00:14:05.246 Latency(us) 00:14:05.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.246 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0x0 length 0x2000 00:14:05.246 nvme0n1 : 5.57 249.78 15.61 0.00 0.00 500434.46 127735.62 713031.68 00:14:05.246 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0x2000 length 0x2000 00:14:05.246 nvme0n1 : 5.69 244.32 15.27 0.00 0.00 512403.97 59578.18 667275.64 00:14:05.246 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0x0 length 0x8000 00:14:05.246 nvme1n1 : 5.62 262.87 16.43 0.00 0.00 466623.25 55288.55 625332.60 00:14:05.246 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0x8000 length 0x8000 00:14:05.246 nvme1n1 : 5.68 260.08 16.25 0.00 0.00 470340.16 66250.94 583389.56 00:14:05.246 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0x0 length 0x8000 00:14:05.246 nvme1n2 : 5.62 230.61 14.41 0.00 0.00 519092.98 59816.49 598641.57 00:14:05.246 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0x8000 length 0x8000 00:14:05.246 nvme1n2 : 5.70 244.23 15.26 0.00 0.00 487100.85 58148.31 632958.60 00:14:05.246 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0x0 length 0x8000 00:14:05.246 nvme1n3 : 5.57 249.61 15.60 0.00 0.00 471842.24 57909.99 632958.60 00:14:05.246 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0x8000 length 0x8000 00:14:05.246 nvme1n3 : 5.70 259.36 16.21 0.00 0.00 446769.99 14417.92 415617.40 00:14:05.246 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0x0 length 0xbd0b 00:14:05.246 nvme2n1 : 5.68 275.87 17.24 0.00 0.00 417542.15 60054.81 404178.39 00:14:05.246 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:05.246 nvme2n1 : 5.70 259.18 16.20 0.00 0.00 442252.90 2681.02 415617.40 00:14:05.246 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0x0 length 0xa000 00:14:05.246 nvme3n1 : 5.70 259.52 16.22 0.00 0.00 438282.51 195.49 655836.63 00:14:05.246 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:05.246 Verification LBA range: start 0xa000 length 0xa000 00:14:05.246 nvme3n1 : 5.72 257.56 16.10 0.00 0.00 437936.63 169.43 644397.61 00:14:05.246 =================================================================================================================== 00:14:05.246 Total : 3052.99 190.81 0.00 0.00 466254.04 169.43 713031.68 00:14:05.505 00:14:05.505 real 0m7.962s 00:14:05.505 user 0m14.126s 00:14:05.505 sys 0m0.670s 00:14:05.505 21:05:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.505 ************************************ 00:14:05.505 END TEST bdev_verify_big_io 00:14:05.505 ************************************ 00:14:05.505 21:05:19 -- common/autotest_common.sh@10 -- # set +x 00:14:05.505 21:05:19 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:05.505 21:05:19 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:05.505 21:05:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:05.505 21:05:19 -- common/autotest_common.sh@10 -- # set +x 00:14:05.505 ************************************ 00:14:05.505 START TEST bdev_write_zeroes 00:14:05.505 ************************************ 00:14:05.505 21:05:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:05.763 [2024-07-13 21:05:19.498121] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:05.763 [2024-07-13 21:05:19.498292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69267 ] 00:14:05.763 [2024-07-13 21:05:19.661065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.022 [2024-07-13 21:05:19.838705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.590 Running I/O for 1 seconds... 00:14:07.526 00:14:07.526 Latency(us) 00:14:07.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.526 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.526 nvme0n1 : 1.01 11987.82 46.83 0.00 0.00 10666.62 6851.49 18588.39 00:14:07.526 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.526 nvme1n1 : 1.02 11970.04 46.76 0.00 0.00 10674.16 6940.86 17754.30 00:14:07.526 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.526 nvme1n2 : 1.02 11952.54 46.69 0.00 0.00 10680.37 7030.23 16801.05 00:14:07.526 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.526 nvme1n3 : 1.02 11935.12 46.62 0.00 0.00 10687.81 7030.23 17992.61 00:14:07.526 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.526 nvme2n1 : 1.01 17350.13 67.77 0.00 0.00 7343.47 3038.49 12988.04 00:14:07.526 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.526 nvme3n1 : 1.02 11918.97 46.56 0.00 0.00 10630.35 6702.55 19422.49 00:14:07.526 =================================================================================================================== 00:14:07.526 Total : 77114.62 301.23 0.00 0.00 9922.62 3038.49 19422.49 00:14:08.460 00:14:08.460 real 0m2.868s 00:14:08.460 user 0m2.135s 00:14:08.460 sys 0m0.533s 00:14:08.460 21:05:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.460 21:05:22 -- common/autotest_common.sh@10 -- # set +x 00:14:08.460 ************************************ 00:14:08.460 END TEST bdev_write_zeroes 00:14:08.460 ************************************ 00:14:08.460 21:05:22 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:08.460 21:05:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:08.460 21:05:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:08.460 21:05:22 -- common/autotest_common.sh@10 
-- # set +x 00:14:08.460 ************************************ 00:14:08.461 START TEST bdev_json_nonenclosed 00:14:08.461 ************************************ 00:14:08.461 21:05:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:08.719 [2024-07-13 21:05:22.429985] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:08.719 [2024-07-13 21:05:22.430195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69316 ] 00:14:08.719 [2024-07-13 21:05:22.604738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.977 [2024-07-13 21:05:22.789101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.977 [2024-07-13 21:05:22.789303] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:08.977 [2024-07-13 21:05:22.789332] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:09.543 00:14:09.543 real 0m0.836s 00:14:09.543 user 0m0.596s 00:14:09.543 sys 0m0.133s 00:14:09.543 21:05:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.543 21:05:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.543 ************************************ 00:14:09.543 END TEST bdev_json_nonenclosed 00:14:09.543 ************************************ 00:14:09.543 21:05:23 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.543 21:05:23 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:09.543 21:05:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:09.543 21:05:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.543 ************************************ 00:14:09.543 START TEST bdev_json_nonarray 00:14:09.543 ************************************ 00:14:09.543 21:05:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.543 [2024-07-13 21:05:23.309116] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:09.543 [2024-07-13 21:05:23.309265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69347 ] 00:14:09.802 [2024-07-13 21:05:23.472933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.802 [2024-07-13 21:05:23.661485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.802 [2024-07-13 21:05:23.661717] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
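Both JSON tests hand bdevperf a deliberately malformed --json file and expect the app to stop with an error: nonenclosed.json omits the enclosing braces and nonarray.json makes 'subsystems' something other than an array, producing the two *ERROR* lines above. For contrast, a valid configuration has the outer shape used by the save_config dumps later in this log, with 'subsystems' as an array of objects (illustrative shape only, with the per-subsystem entries left empty):

    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }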
00:14:09.802 [2024-07-13 21:05:23.661746] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:10.369 00:14:10.369 real 0m0.819s 00:14:10.369 user 0m0.595s 00:14:10.369 sys 0m0.117s 00:14:10.369 21:05:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.369 21:05:24 -- common/autotest_common.sh@10 -- # set +x 00:14:10.369 ************************************ 00:14:10.369 END TEST bdev_json_nonarray 00:14:10.369 ************************************ 00:14:10.369 21:05:24 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:14:10.369 21:05:24 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:14:10.369 21:05:24 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:14:10.369 21:05:24 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:10.369 21:05:24 -- bdev/blockdev.sh@809 -- # cleanup 00:14:10.369 21:05:24 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:10.369 21:05:24 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:10.369 21:05:24 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:14:10.369 21:05:24 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:14:10.369 21:05:24 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:14:10.369 21:05:24 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:14:10.369 21:05:24 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:11.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:13.833 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.833 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.833 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.833 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.833 00:14:13.833 real 1m2.243s 00:14:13.833 user 1m42.082s 00:14:13.833 sys 0m32.206s 00:14:13.833 21:05:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.833 ************************************ 00:14:13.833 END TEST blockdev_xnvme 00:14:13.833 ************************************ 00:14:13.833 21:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:13.833 21:05:27 -- spdk/autotest.sh@259 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:13.833 21:05:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:13.833 21:05:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:13.833 21:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:13.833 ************************************ 00:14:13.833 START TEST ublk 00:14:13.833 ************************************ 00:14:13.833 21:05:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:13.833 * Looking for test storage... 
00:14:13.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:13.833 21:05:27 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:13.833 21:05:27 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:13.833 21:05:27 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:13.833 21:05:27 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:13.833 21:05:27 -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:13.833 21:05:27 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:13.833 21:05:27 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:13.833 21:05:27 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:13.833 21:05:27 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:13.833 21:05:27 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:13.833 21:05:27 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:13.833 21:05:27 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:13.833 21:05:27 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:13.833 21:05:27 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:13.833 21:05:27 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:13.833 21:05:27 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:13.833 21:05:27 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:13.833 21:05:27 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:13.833 21:05:27 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:13.833 21:05:27 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:13.833 21:05:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:13.833 21:05:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:13.833 21:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:13.833 ************************************ 00:14:13.833 START TEST test_save_ublk_config 00:14:13.833 ************************************ 00:14:13.833 21:05:27 -- common/autotest_common.sh@1104 -- # test_save_config 00:14:13.833 21:05:27 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:13.833 21:05:27 -- ublk/ublk.sh@103 -- # tgtpid=69645 00:14:13.833 21:05:27 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:13.833 21:05:27 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:13.833 21:05:27 -- ublk/ublk.sh@106 -- # waitforlisten 69645 00:14:13.833 21:05:27 -- common/autotest_common.sh@819 -- # '[' -z 69645 ']' 00:14:13.833 21:05:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.833 21:05:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:13.833 21:05:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.833 21:05:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:13.833 21:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:13.833 [2024-07-13 21:05:27.749900] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
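test_save_ublk_config, which the traces below walk through, is a configuration round trip: bring up spdk_tgt with the ublk target, create a malloc-backed /dev/ublkb0, snapshot the live state with save_config, kill the target, restart it from the captured JSON, and check the disk reappears. An outline in terms of the harness's own helpers (rpc_cmd, killprocess, waitforlisten), reconstructed from the trace rather than quoted verbatim from ublk.sh:

    # round trip exercised by test_save_ublk_config (sketch; spdk_tgt path abbreviated)
    spdk_tgt -L ublk & tgtpid=$!; waitforlisten "$tgtpid"
    rpc_cmd                                # ublk_create_target + ublk_start_disk payload
                                           # (exact RPC input is not shown in the xtrace)
    config=$(rpc_cmd save_config)          # the JSON dumped at 21:05:29 below
    killprocess "$tgtpid"
    spdk_tgt -L ublk -c <(echo "$config") & tgtpid=$!   # second target reads /dev/fd/63
    waitforlisten "$tgtpid"
    [[ $(rpc_cmd ublk_get_disks | jq -r '.[0].ublk_device') == /dev/ublkb0 ]]
    [[ -b /dev/ublkb0 ]]                   # the block device must exist again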
00:14:13.833 [2024-07-13 21:05:27.750078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69645 ] 00:14:14.091 [2024-07-13 21:05:27.923954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.349 [2024-07-13 21:05:28.166792] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:14.349 [2024-07-13 21:05:28.167119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.724 21:05:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:15.724 21:05:29 -- common/autotest_common.sh@852 -- # return 0 00:14:15.724 21:05:29 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:15.724 21:05:29 -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:15.724 21:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.724 21:05:29 -- common/autotest_common.sh@10 -- # set +x 00:14:15.724 [2024-07-13 21:05:29.442179] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:15.724 malloc0 00:14:15.724 [2024-07-13 21:05:29.513198] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:15.724 [2024-07-13 21:05:29.513372] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:15.724 [2024-07-13 21:05:29.513386] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:15.724 [2024-07-13 21:05:29.513397] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:15.724 [2024-07-13 21:05:29.520888] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:15.724 [2024-07-13 21:05:29.520922] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:15.724 [2024-07-13 21:05:29.526943] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:15.724 [2024-07-13 21:05:29.527069] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:15.724 [2024-07-13 21:05:29.542941] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:15.724 0 00:14:15.724 21:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.724 21:05:29 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:15.724 21:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.724 21:05:29 -- common/autotest_common.sh@10 -- # set +x 00:14:15.982 21:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.982 21:05:29 -- ublk/ublk.sh@115 -- # config='{ 00:14:15.982 "subsystems": [ 00:14:15.982 { 00:14:15.982 "subsystem": "iobuf", 00:14:15.982 "config": [ 00:14:15.982 { 00:14:15.982 "method": "iobuf_set_options", 00:14:15.982 "params": { 00:14:15.982 "small_pool_count": 8192, 00:14:15.982 "large_pool_count": 1024, 00:14:15.982 "small_bufsize": 8192, 00:14:15.982 "large_bufsize": 135168 00:14:15.982 } 00:14:15.982 } 00:14:15.982 ] 00:14:15.982 }, 00:14:15.982 { 00:14:15.982 "subsystem": "sock", 00:14:15.982 "config": [ 00:14:15.982 { 00:14:15.982 "method": "sock_impl_set_options", 00:14:15.982 "params": { 00:14:15.982 "impl_name": "posix", 00:14:15.982 "recv_buf_size": 2097152, 00:14:15.982 "send_buf_size": 2097152, 00:14:15.982 "enable_recv_pipe": true, 00:14:15.982 "enable_quickack": false, 00:14:15.982 "enable_placement_id": 0, 00:14:15.982 
"enable_zerocopy_send_server": true, 00:14:15.982 "enable_zerocopy_send_client": false, 00:14:15.982 "zerocopy_threshold": 0, 00:14:15.982 "tls_version": 0, 00:14:15.982 "enable_ktls": false 00:14:15.982 } 00:14:15.982 }, 00:14:15.982 { 00:14:15.982 "method": "sock_impl_set_options", 00:14:15.982 "params": { 00:14:15.982 "impl_name": "ssl", 00:14:15.982 "recv_buf_size": 4096, 00:14:15.982 "send_buf_size": 4096, 00:14:15.982 "enable_recv_pipe": true, 00:14:15.982 "enable_quickack": false, 00:14:15.982 "enable_placement_id": 0, 00:14:15.982 "enable_zerocopy_send_server": true, 00:14:15.982 "enable_zerocopy_send_client": false, 00:14:15.982 "zerocopy_threshold": 0, 00:14:15.982 "tls_version": 0, 00:14:15.982 "enable_ktls": false 00:14:15.982 } 00:14:15.982 } 00:14:15.982 ] 00:14:15.982 }, 00:14:15.982 { 00:14:15.982 "subsystem": "vmd", 00:14:15.982 "config": [] 00:14:15.982 }, 00:14:15.982 { 00:14:15.982 "subsystem": "accel", 00:14:15.982 "config": [ 00:14:15.982 { 00:14:15.982 "method": "accel_set_options", 00:14:15.982 "params": { 00:14:15.982 "small_cache_size": 128, 00:14:15.982 "large_cache_size": 16, 00:14:15.982 "task_count": 2048, 00:14:15.982 "sequence_count": 2048, 00:14:15.982 "buf_count": 2048 00:14:15.982 } 00:14:15.982 } 00:14:15.982 ] 00:14:15.982 }, 00:14:15.982 { 00:14:15.982 "subsystem": "bdev", 00:14:15.982 "config": [ 00:14:15.982 { 00:14:15.982 "method": "bdev_set_options", 00:14:15.982 "params": { 00:14:15.982 "bdev_io_pool_size": 65535, 00:14:15.982 "bdev_io_cache_size": 256, 00:14:15.982 "bdev_auto_examine": true, 00:14:15.982 "iobuf_small_cache_size": 128, 00:14:15.982 "iobuf_large_cache_size": 16 00:14:15.982 } 00:14:15.982 }, 00:14:15.982 { 00:14:15.982 "method": "bdev_raid_set_options", 00:14:15.982 "params": { 00:14:15.982 "process_window_size_kb": 1024 00:14:15.982 } 00:14:15.982 }, 00:14:15.982 { 00:14:15.982 "method": "bdev_iscsi_set_options", 00:14:15.982 "params": { 00:14:15.982 "timeout_sec": 30 00:14:15.982 } 00:14:15.982 }, 00:14:15.982 { 00:14:15.982 "method": "bdev_nvme_set_options", 00:14:15.982 "params": { 00:14:15.982 "action_on_timeout": "none", 00:14:15.982 "timeout_us": 0, 00:14:15.982 "timeout_admin_us": 0, 00:14:15.982 "keep_alive_timeout_ms": 10000, 00:14:15.982 "transport_retry_count": 4, 00:14:15.982 "arbitration_burst": 0, 00:14:15.982 "low_priority_weight": 0, 00:14:15.982 "medium_priority_weight": 0, 00:14:15.982 "high_priority_weight": 0, 00:14:15.982 "nvme_adminq_poll_period_us": 10000, 00:14:15.982 "nvme_ioq_poll_period_us": 0, 00:14:15.982 "io_queue_requests": 0, 00:14:15.982 "delay_cmd_submit": true, 00:14:15.982 "bdev_retry_count": 3, 00:14:15.982 "transport_ack_timeout": 0, 00:14:15.982 "ctrlr_loss_timeout_sec": 0, 00:14:15.982 "reconnect_delay_sec": 0, 00:14:15.982 "fast_io_fail_timeout_sec": 0, 00:14:15.982 "generate_uuids": false, 00:14:15.982 "transport_tos": 0, 00:14:15.982 "io_path_stat": false, 00:14:15.982 "allow_accel_sequence": false 00:14:15.982 } 00:14:15.982 }, 00:14:15.982 { 00:14:15.982 "method": "bdev_nvme_set_hotplug", 00:14:15.982 "params": { 00:14:15.982 "period_us": 100000, 00:14:15.982 "enable": false 00:14:15.982 } 00:14:15.982 }, 00:14:15.983 { 00:14:15.983 "method": "bdev_malloc_create", 00:14:15.983 "params": { 00:14:15.983 "name": "malloc0", 00:14:15.983 "num_blocks": 8192, 00:14:15.983 "block_size": 4096, 00:14:15.983 "physical_block_size": 4096, 00:14:15.983 "uuid": "2d1cfaa5-5455-4e9a-9114-8f2bf54d96b3", 00:14:15.983 "optimal_io_boundary": 0 00:14:15.983 } 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 
"method": "bdev_wait_for_examine" 00:14:15.983 } 00:14:15.983 ] 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "subsystem": "scsi", 00:14:15.983 "config": null 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "subsystem": "scheduler", 00:14:15.983 "config": [ 00:14:15.983 { 00:14:15.983 "method": "framework_set_scheduler", 00:14:15.983 "params": { 00:14:15.983 "name": "static" 00:14:15.983 } 00:14:15.983 } 00:14:15.983 ] 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "subsystem": "vhost_scsi", 00:14:15.983 "config": [] 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "subsystem": "vhost_blk", 00:14:15.983 "config": [] 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "subsystem": "ublk", 00:14:15.983 "config": [ 00:14:15.983 { 00:14:15.983 "method": "ublk_create_target", 00:14:15.983 "params": { 00:14:15.983 "cpumask": "1" 00:14:15.983 } 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "method": "ublk_start_disk", 00:14:15.983 "params": { 00:14:15.983 "bdev_name": "malloc0", 00:14:15.983 "ublk_id": 0, 00:14:15.983 "num_queues": 1, 00:14:15.983 "queue_depth": 128 00:14:15.983 } 00:14:15.983 } 00:14:15.983 ] 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "subsystem": "nbd", 00:14:15.983 "config": [] 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "subsystem": "nvmf", 00:14:15.983 "config": [ 00:14:15.983 { 00:14:15.983 "method": "nvmf_set_config", 00:14:15.983 "params": { 00:14:15.983 "discovery_filter": "match_any", 00:14:15.983 "admin_cmd_passthru": { 00:14:15.983 "identify_ctrlr": false 00:14:15.983 } 00:14:15.983 } 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "method": "nvmf_set_max_subsystems", 00:14:15.983 "params": { 00:14:15.983 "max_subsystems": 1024 00:14:15.983 } 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "method": "nvmf_set_crdt", 00:14:15.983 "params": { 00:14:15.983 "crdt1": 0, 00:14:15.983 "crdt2": 0, 00:14:15.983 "crdt3": 0 00:14:15.983 } 00:14:15.983 } 00:14:15.983 ] 00:14:15.983 }, 00:14:15.983 { 00:14:15.983 "subsystem": "iscsi", 00:14:15.983 "config": [ 00:14:15.983 { 00:14:15.983 "method": "iscsi_set_options", 00:14:15.983 "params": { 00:14:15.983 "node_base": "iqn.2016-06.io.spdk", 00:14:15.983 "max_sessions": 128, 00:14:15.983 "max_connections_per_session": 2, 00:14:15.983 "max_queue_depth": 64, 00:14:15.983 "default_time2wait": 2, 00:14:15.983 "default_time2retain": 20, 00:14:15.983 "first_burst_length": 8192, 00:14:15.983 "immediate_data": true, 00:14:15.983 "allow_duplicated_isid": false, 00:14:15.983 "error_recovery_level": 0, 00:14:15.983 "nop_timeout": 60, 00:14:15.983 "nop_in_interval": 30, 00:14:15.983 "disable_chap": false, 00:14:15.983 "require_chap": false, 00:14:15.983 "mutual_chap": false, 00:14:15.983 "chap_group": 0, 00:14:15.983 "max_large_datain_per_connection": 64, 00:14:15.983 "max_r2t_per_connection": 4, 00:14:15.983 "pdu_pool_size": 36864, 00:14:15.983 "immediate_data_pool_size": 16384, 00:14:15.983 "data_out_pool_size": 2048 00:14:15.983 } 00:14:15.983 } 00:14:15.983 ] 00:14:15.983 } 00:14:15.983 ] 00:14:15.983 }' 00:14:15.983 21:05:29 -- ublk/ublk.sh@116 -- # killprocess 69645 00:14:15.983 21:05:29 -- common/autotest_common.sh@926 -- # '[' -z 69645 ']' 00:14:15.983 21:05:29 -- common/autotest_common.sh@930 -- # kill -0 69645 00:14:15.983 21:05:29 -- common/autotest_common.sh@931 -- # uname 00:14:15.983 21:05:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:15.983 21:05:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69645 00:14:15.983 21:05:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:15.983 killing process with pid 
69645 00:14:15.983 21:05:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:15.983 21:05:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69645' 00:14:15.983 21:05:29 -- common/autotest_common.sh@945 -- # kill 69645 00:14:15.983 21:05:29 -- common/autotest_common.sh@950 -- # wait 69645 00:14:17.358 [2024-07-13 21:05:31.101598] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:17.358 [2024-07-13 21:05:31.149063] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:17.358 [2024-07-13 21:05:31.149332] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:17.358 [2024-07-13 21:05:31.156876] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:17.358 [2024-07-13 21:05:31.156941] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:17.358 [2024-07-13 21:05:31.156954] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:17.358 [2024-07-13 21:05:31.156993] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:17.358 [2024-07-13 21:05:31.157189] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:18.780 21:05:32 -- ublk/ublk.sh@119 -- # tgtpid=69713 00:14:18.780 21:05:32 -- ublk/ublk.sh@121 -- # waitforlisten 69713 00:14:18.780 21:05:32 -- common/autotest_common.sh@819 -- # '[' -z 69713 ']' 00:14:18.780 21:05:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.780 21:05:32 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:18.780 21:05:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:18.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.780 21:05:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:18.780 21:05:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:18.780 21:05:32 -- ublk/ublk.sh@118 -- # echo '{ 00:14:18.780 "subsystems": [ 00:14:18.780 { 00:14:18.780 "subsystem": "iobuf", 00:14:18.780 "config": [ 00:14:18.780 { 00:14:18.780 "method": "iobuf_set_options", 00:14:18.780 "params": { 00:14:18.780 "small_pool_count": 8192, 00:14:18.780 "large_pool_count": 1024, 00:14:18.780 "small_bufsize": 8192, 00:14:18.780 "large_bufsize": 135168 00:14:18.780 } 00:14:18.780 } 00:14:18.780 ] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "sock", 00:14:18.780 "config": [ 00:14:18.780 { 00:14:18.780 "method": "sock_impl_set_options", 00:14:18.780 "params": { 00:14:18.780 "impl_name": "posix", 00:14:18.780 "recv_buf_size": 2097152, 00:14:18.780 "send_buf_size": 2097152, 00:14:18.780 "enable_recv_pipe": true, 00:14:18.780 "enable_quickack": false, 00:14:18.780 "enable_placement_id": 0, 00:14:18.780 "enable_zerocopy_send_server": true, 00:14:18.780 "enable_zerocopy_send_client": false, 00:14:18.780 "zerocopy_threshold": 0, 00:14:18.780 "tls_version": 0, 00:14:18.780 "enable_ktls": false 00:14:18.780 } 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "method": "sock_impl_set_options", 00:14:18.780 "params": { 00:14:18.780 "impl_name": "ssl", 00:14:18.780 "recv_buf_size": 4096, 00:14:18.780 "send_buf_size": 4096, 00:14:18.780 "enable_recv_pipe": true, 00:14:18.780 "enable_quickack": false, 00:14:18.780 "enable_placement_id": 0, 00:14:18.780 "enable_zerocopy_send_server": true, 00:14:18.780 "enable_zerocopy_send_client": false, 00:14:18.780 "zerocopy_threshold": 0, 00:14:18.780 "tls_version": 0, 00:14:18.780 "enable_ktls": false 00:14:18.780 } 00:14:18.780 } 00:14:18.780 ] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "vmd", 00:14:18.780 "config": [] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "accel", 00:14:18.780 "config": [ 00:14:18.780 { 00:14:18.780 "method": "accel_set_options", 00:14:18.780 "params": { 00:14:18.780 "small_cache_size": 128, 00:14:18.780 "large_cache_size": 16, 00:14:18.780 "task_count": 2048, 00:14:18.780 "sequence_count": 2048, 00:14:18.780 "buf_count": 2048 00:14:18.780 } 00:14:18.780 } 00:14:18.780 ] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "bdev", 00:14:18.780 "config": [ 00:14:18.780 { 00:14:18.780 "method": "bdev_set_options", 00:14:18.780 "params": { 00:14:18.780 "bdev_io_pool_size": 65535, 00:14:18.780 "bdev_io_cache_size": 256, 00:14:18.780 "bdev_auto_examine": true, 00:14:18.780 "iobuf_small_cache_size": 128, 00:14:18.780 "iobuf_large_cache_size": 16 00:14:18.780 } 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "method": "bdev_raid_set_options", 00:14:18.780 "params": { 00:14:18.780 "process_window_size_kb": 1024 00:14:18.780 } 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "method": "bdev_iscsi_set_options", 00:14:18.780 "params": { 00:14:18.780 "timeout_sec": 30 00:14:18.780 } 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "method": "bdev_nvme_set_options", 00:14:18.780 "params": { 00:14:18.780 "action_on_timeout": "none", 00:14:18.780 "timeout_us": 0, 00:14:18.780 "timeout_admin_us": 0, 00:14:18.780 "keep_alive_timeout_ms": 10000, 00:14:18.780 "transport_retry_count": 4, 00:14:18.780 "arbitration_burst": 0, 00:14:18.780 "low_priority_weight": 0, 00:14:18.780 "medium_priority_weight": 0, 00:14:18.780 "high_priority_weight": 0, 00:14:18.780 "nvme_adminq_poll_period_us": 10000, 00:14:18.780 "nvme_ioq_poll_period_us": 0, 00:14:18.780 "io_queue_requests": 0, 00:14:18.780 "delay_cmd_submit": true, 00:14:18.780 
"bdev_retry_count": 3, 00:14:18.780 "transport_ack_timeout": 0, 00:14:18.780 "ctrlr_loss_timeout_sec": 0, 00:14:18.780 "reconnect_delay_sec": 0, 00:14:18.780 "fast_io_fail_timeout_sec": 0, 00:14:18.780 "generate_uuids": false, 00:14:18.780 "transport_tos": 0, 00:14:18.780 "io_path_stat": false, 00:14:18.780 "allow_accel_sequence": false 00:14:18.780 } 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "method": "bdev_nvme_set_hotplug", 00:14:18.780 "params": { 00:14:18.780 "period_us": 100000, 00:14:18.780 "enable": false 00:14:18.780 } 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "method": "bdev_malloc_create", 00:14:18.780 "params": { 00:14:18.780 "name": "malloc0", 00:14:18.780 "num_blocks": 8192, 00:14:18.780 "block_size": 4096, 00:14:18.780 "physical_block_size": 4096, 00:14:18.780 "uuid": "2d1cfaa5-5455-4e9a-9114-8f2bf54d96b3", 00:14:18.780 "optimal_io_boundary": 0 00:14:18.780 } 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "method": "bdev_wait_for_examine" 00:14:18.780 } 00:14:18.780 ] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "scsi", 00:14:18.780 "config": null 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "scheduler", 00:14:18.780 "config": [ 00:14:18.780 { 00:14:18.780 "method": "framework_set_scheduler", 00:14:18.780 "params": { 00:14:18.780 "name": "static" 00:14:18.780 } 00:14:18.780 } 00:14:18.780 ] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "vhost_scsi", 00:14:18.780 "config": [] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "vhost_blk", 00:14:18.780 "config": [] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "ublk", 00:14:18.780 "config": [ 00:14:18.780 { 00:14:18.780 "method": "ublk_create_target", 00:14:18.780 "params": { 00:14:18.780 "cpumask": "1" 00:14:18.780 } 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "method": "ublk_start_disk", 00:14:18.780 "params": { 00:14:18.780 "bdev_name": "malloc0", 00:14:18.780 "ublk_id": 0, 00:14:18.780 "num_queues": 1, 00:14:18.780 "queue_depth": 128 00:14:18.780 } 00:14:18.780 } 00:14:18.780 ] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "nbd", 00:14:18.780 "config": [] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "nvmf", 00:14:18.780 "config": [ 00:14:18.780 { 00:14:18.780 "method": "nvmf_set_config", 00:14:18.780 "params": { 00:14:18.780 "discovery_filter": "match_any", 00:14:18.780 "admin_cmd_passthru": { 00:14:18.780 "identify_ctrlr": false 00:14:18.780 } 00:14:18.780 } 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "method": "nvmf_set_max_subsystems", 00:14:18.780 "params": { 00:14:18.780 "max_subsystems": 1024 00:14:18.780 } 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "method": "nvmf_set_crdt", 00:14:18.780 "params": { 00:14:18.780 "crdt1": 0, 00:14:18.780 "crdt2": 0, 00:14:18.780 "crdt3": 0 00:14:18.780 } 00:14:18.780 } 00:14:18.780 ] 00:14:18.780 }, 00:14:18.780 { 00:14:18.780 "subsystem": "iscsi", 00:14:18.780 "config": [ 00:14:18.780 { 00:14:18.780 "method": "iscsi_set_options", 00:14:18.780 "params": { 00:14:18.780 "node_base": "iqn.2016-06.io.spdk", 00:14:18.780 "max_sessions": 128, 00:14:18.780 "max_connections_per_session": 2, 00:14:18.780 "max_queue_depth": 64, 00:14:18.780 "default_time2wait": 2, 00:14:18.780 "default_time2retain": 20, 00:14:18.780 "first_burst_length": 8192, 00:14:18.780 "immediate_data": true, 00:14:18.780 "allow_duplicated_isid": false, 00:14:18.780 "error_recovery_level": 0, 00:14:18.780 "nop_timeout": 60, 00:14:18.780 "nop_in_interval": 30, 00:14:18.780 "disable_chap": false, 00:14:18.780 "require_chap": false, 00:14:18.780 
"mutual_chap": false, 00:14:18.780 "chap_group": 0, 00:14:18.780 "max_large_datain_per_connection": 64, 00:14:18.780 "max_r2t_per_connection": 4, 00:14:18.780 "pdu_pool_size": 36864, 00:14:18.780 "immediate_data_pool_size": 16384, 00:14:18.780 "data_out_pool_size": 2048 00:14:18.780 } 00:14:18.780 } 00:14:18.780 ] 00:14:18.780 } 00:14:18.780 ] 00:14:18.780 }' 00:14:18.780 21:05:32 -- common/autotest_common.sh@10 -- # set +x 00:14:18.781 [2024-07-13 21:05:32.480707] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:18.781 [2024-07-13 21:05:32.480929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69713 ] 00:14:18.781 [2024-07-13 21:05:32.650764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.039 [2024-07-13 21:05:32.833022] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.039 [2024-07-13 21:05:32.833301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.974 [2024-07-13 21:05:33.632888] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:19.974 [2024-07-13 21:05:33.640061] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:19.974 [2024-07-13 21:05:33.640153] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:19.974 [2024-07-13 21:05:33.640169] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:19.974 [2024-07-13 21:05:33.640178] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:19.974 [2024-07-13 21:05:33.649001] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:19.974 [2024-07-13 21:05:33.649028] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:19.974 [2024-07-13 21:05:33.654913] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:19.974 [2024-07-13 21:05:33.655019] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:19.974 [2024-07-13 21:05:33.674889] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:20.232 21:05:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:20.232 21:05:34 -- common/autotest_common.sh@852 -- # return 0 00:14:20.232 21:05:34 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:20.232 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.232 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:14:20.232 21:05:34 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:20.232 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.232 21:05:34 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:20.232 21:05:34 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:20.232 21:05:34 -- ublk/ublk.sh@125 -- # killprocess 69713 00:14:20.232 21:05:34 -- common/autotest_common.sh@926 -- # '[' -z 69713 ']' 00:14:20.232 21:05:34 -- common/autotest_common.sh@930 -- # kill -0 69713 00:14:20.232 21:05:34 -- common/autotest_common.sh@931 -- # uname 00:14:20.232 21:05:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:20.232 21:05:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69713 00:14:20.232 21:05:34 
-- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:20.232 21:05:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:20.232 killing process with pid 69713 00:14:20.232 21:05:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69713' 00:14:20.232 21:05:34 -- common/autotest_common.sh@945 -- # kill 69713 00:14:20.232 21:05:34 -- common/autotest_common.sh@950 -- # wait 69713 00:14:21.609 [2024-07-13 21:05:35.449920] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:21.609 [2024-07-13 21:05:35.492973] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:21.609 [2024-07-13 21:05:35.493156] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:21.609 [2024-07-13 21:05:35.503027] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:21.609 [2024-07-13 21:05:35.503084] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:21.609 [2024-07-13 21:05:35.503095] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:21.609 [2024-07-13 21:05:35.503148] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:21.609 [2024-07-13 21:05:35.503379] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:22.984 21:05:36 -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:22.984 00:14:22.984 real 0m9.042s 00:14:22.984 user 0m8.228s 00:14:22.984 sys 0m2.154s 00:14:22.984 21:05:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.984 21:05:36 -- common/autotest_common.sh@10 -- # set +x 00:14:22.984 ************************************ 00:14:22.984 END TEST test_save_ublk_config 00:14:22.984 ************************************ 00:14:22.984 21:05:36 -- ublk/ublk.sh@139 -- # spdk_pid=69795 00:14:22.984 21:05:36 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:22.984 21:05:36 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.984 21:05:36 -- ublk/ublk.sh@141 -- # waitforlisten 69795 00:14:22.984 21:05:36 -- common/autotest_common.sh@819 -- # '[' -z 69795 ']' 00:14:22.984 21:05:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.984 21:05:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:22.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.985 21:05:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.985 21:05:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:22.985 21:05:36 -- common/autotest_common.sh@10 -- # set +x 00:14:22.985 [2024-07-13 21:05:36.770573] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
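The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the harness's waitforlisten helper, and the "(( i == 0 ))" / "return 0" trace lines that follow are its exit check. A condensed sketch of what such a helper does, assuming rpc.py is on PATH; this is an approximation for readability, not the harness's exact implementation:

    # Poll the RPC socket until the target answers, or give up after max_retries tries.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = max_retries; i > 0; i--)); do
        rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
    (( i == 0 )) && { echo "target never came up" >&2; exit 1; }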
00:14:22.985 [2024-07-13 21:05:36.770753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69795 ] 00:14:23.243 [2024-07-13 21:05:36.943313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.243 [2024-07-13 21:05:37.116382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:23.243 [2024-07-13 21:05:37.116800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.243 [2024-07-13 21:05:37.116826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.618 21:05:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:24.618 21:05:38 -- common/autotest_common.sh@852 -- # return 0 00:14:24.618 21:05:38 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:24.618 21:05:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:24.618 21:05:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:24.618 21:05:38 -- common/autotest_common.sh@10 -- # set +x 00:14:24.618 ************************************ 00:14:24.618 START TEST test_create_ublk 00:14:24.618 ************************************ 00:14:24.618 21:05:38 -- common/autotest_common.sh@1104 -- # test_create_ublk 00:14:24.618 21:05:38 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:24.618 21:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.618 21:05:38 -- common/autotest_common.sh@10 -- # set +x 00:14:24.618 [2024-07-13 21:05:38.489309] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:24.618 21:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.618 21:05:38 -- ublk/ublk.sh@33 -- # ublk_target= 00:14:24.618 21:05:38 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:24.618 21:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.618 21:05:38 -- common/autotest_common.sh@10 -- # set +x 00:14:24.876 21:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.876 21:05:38 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:24.876 21:05:38 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:24.876 21:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.876 21:05:38 -- common/autotest_common.sh@10 -- # set +x 00:14:24.876 [2024-07-13 21:05:38.741094] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:24.876 [2024-07-13 21:05:38.741656] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:24.876 [2024-07-13 21:05:38.741681] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:24.876 [2024-07-13 21:05:38.741696] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:24.876 [2024-07-13 21:05:38.750148] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:24.876 [2024-07-13 21:05:38.750181] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:24.876 [2024-07-13 21:05:38.756969] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:24.876 [2024-07-13 21:05:38.771184] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:24.876 [2024-07-13 21:05:38.787969] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:14:24.876 21:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.876 21:05:38 -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:24.876 21:05:38 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:24.876 21:05:38 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:24.876 21:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.876 21:05:38 -- common/autotest_common.sh@10 -- # set +x 00:14:25.134 21:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.135 21:05:38 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:25.135 { 00:14:25.135 "ublk_device": "/dev/ublkb0", 00:14:25.135 "id": 0, 00:14:25.135 "queue_depth": 512, 00:14:25.135 "num_queues": 4, 00:14:25.135 "bdev_name": "Malloc0" 00:14:25.135 } 00:14:25.135 ]' 00:14:25.135 21:05:38 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:25.135 21:05:38 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:25.135 21:05:38 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:25.135 21:05:38 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:25.135 21:05:38 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:25.135 21:05:38 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:25.135 21:05:38 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:25.135 21:05:39 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:25.135 21:05:39 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:25.393 21:05:39 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:25.393 21:05:39 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:25.393 21:05:39 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:25.393 21:05:39 -- lvol/common.sh@41 -- # local offset=0 00:14:25.393 21:05:39 -- lvol/common.sh@42 -- # local size=134217728 00:14:25.393 21:05:39 -- lvol/common.sh@43 -- # local rw=write 00:14:25.393 21:05:39 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:25.393 21:05:39 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:25.393 21:05:39 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:25.393 21:05:39 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:25.393 21:05:39 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:25.393 21:05:39 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:25.393 21:05:39 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:25.393 fio: verification read phase will never start because write phase uses all of runtime 00:14:25.393 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:25.393 fio-3.35 00:14:25.393 Starting 1 process 00:14:37.595 00:14:37.595 fio_test: (groupid=0, jobs=1): err= 0: pid=69850: Sat Jul 13 21:05:49 2024 00:14:37.595 write: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(454MiB/10001msec); 0 zone resets 00:14:37.595 clat (usec): min=54, max=4026, avg=84.58, stdev=130.88 00:14:37.595 lat (usec): min=54, max=4027, avg=85.32, stdev=130.90 00:14:37.595 clat percentiles (usec): 00:14:37.595 | 1.00th=[ 58], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72], 00:14:37.595 | 
30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 76], 00:14:37.595 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 92], 95.00th=[ 101], 00:14:37.595 | 99.00th=[ 124], 99.50th=[ 149], 99.90th=[ 2769], 99.95th=[ 3163], 00:14:37.595 | 99.99th=[ 3752] 00:14:37.595 bw ( KiB/s): min=44624, max=51128, per=100.00%, avg=46525.05, stdev=1249.52, samples=19 00:14:37.595 iops : min=11156, max=12782, avg=11631.26, stdev=312.38, samples=19 00:14:37.595 lat (usec) : 100=94.46%, 250=5.16%, 500=0.05%, 750=0.02%, 1000=0.02% 00:14:37.595 lat (msec) : 2=0.11%, 4=0.18%, 10=0.01% 00:14:37.595 cpu : usr=3.30%, sys=8.49%, ctx=116195, majf=0, minf=796 00:14:37.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.595 issued rwts: total=0,116197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.595 00:14:37.595 Run status group 0 (all jobs): 00:14:37.595 WRITE: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=454MiB (476MB), run=10001-10001msec 00:14:37.595 00:14:37.595 Disk stats (read/write): 00:14:37.595 ublkb0: ios=0/114988, merge=0/0, ticks=0/8757, in_queue=8757, util=99.10% 00:14:37.595 21:05:49 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:37.595 21:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.595 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.595 [2024-07-13 21:05:49.303755] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:37.595 [2024-07-13 21:05:49.341906] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:37.595 [2024-07-13 21:05:49.343168] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:37.595 [2024-07-13 21:05:49.350249] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:37.595 [2024-07-13 21:05:49.350636] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:37.595 [2024-07-13 21:05:49.350661] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:37.595 21:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.595 21:05:49 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:37.595 21:05:49 -- common/autotest_common.sh@640 -- # local es=0 00:14:37.595 21:05:49 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:37.595 21:05:49 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:14:37.595 21:05:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:37.595 21:05:49 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:14:37.595 21:05:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:37.595 21:05:49 -- common/autotest_common.sh@643 -- # rpc_cmd ublk_stop_disk 0 00:14:37.595 21:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.595 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.595 [2024-07-13 21:05:49.365068] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:37.595 request: 00:14:37.595 { 00:14:37.595 "ublk_id": 0, 00:14:37.595 "method": "ublk_stop_disk", 00:14:37.595 "req_id": 1 00:14:37.595 } 00:14:37.595 Got JSON-RPC error response 00:14:37.596 response: 00:14:37.596 { 00:14:37.596 "code": -19, 00:14:37.596 "message": "No such device" 
00:14:37.596 } 00:14:37.596 21:05:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:37.596 21:05:49 -- common/autotest_common.sh@643 -- # es=1 00:14:37.596 21:05:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:37.596 21:05:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:37.596 21:05:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:37.596 21:05:49 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:37.596 21:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 [2024-07-13 21:05:49.372944] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:37.596 [2024-07-13 21:05:49.379181] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:37.596 [2024-07-13 21:05:49.379238] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:37.596 21:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:49 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:37.596 21:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 21:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:49 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:37.596 21:05:49 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:37.596 21:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 21:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:49 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:37.596 21:05:49 -- lvol/common.sh@26 -- # jq length 00:14:37.596 21:05:49 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:37.596 21:05:49 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:37.596 21:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 21:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:49 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:37.596 21:05:49 -- lvol/common.sh@28 -- # jq length 00:14:37.596 21:05:49 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:37.596 00:14:37.596 real 0m11.325s 00:14:37.596 user 0m0.765s 00:14:37.596 sys 0m0.944s 00:14:37.596 21:05:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.596 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 ************************************ 00:14:37.596 END TEST test_create_ublk 00:14:37.596 ************************************ 00:14:37.596 21:05:49 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:37.596 21:05:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:37.596 21:05:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:37.596 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 ************************************ 00:14:37.596 START TEST test_create_multi_ublk 00:14:37.596 ************************************ 00:14:37.596 21:05:49 -- common/autotest_common.sh@1104 -- # test_create_multi_ublk 00:14:37.596 21:05:49 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:37.596 21:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 [2024-07-13 21:05:49.865255] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:37.596 
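Before test_create_multi_ublk repeats it four times over, the single-disk lifecycle that test_create_ublk just walked through can be collapsed into its bare RPC calls. A sketch assuming SPDK's scripts/rpc.py against the default socket; the flags mirror the rpc_cmd invocations traced above, and the comments note the control commands each call produces in this log:

    rpc.py ublk_create_target                      # creates the UBLK target inside spdk_tgt
    rpc.py bdev_malloc_create -b Malloc0 128 4096  # 128 MiB RAM-backed bdev, 4096-byte blocks
    rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # /dev/ublkb0: ADD_DEV, SET_PARAMS, START_DEV
    rpc.py ublk_get_disks                          # ublk_device/id/queue_depth/num_queues/bdev_name
    rpc.py ublk_stop_disk 0                        # STOP_DEV, then DEL_DEV
    rpc.py ublk_destroy_target
    rpc.py bdev_malloc_delete Malloc0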
21:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:49 -- ublk/ublk.sh@62 -- # ublk_target= 00:14:37.596 21:05:49 -- ublk/ublk.sh@64 -- # seq 0 3 00:14:37.596 21:05:49 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:37.596 21:05:49 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:37.596 21:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 21:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:50 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:37.596 21:05:50 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:37.596 21:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:50 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 [2024-07-13 21:05:50.107098] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:37.596 [2024-07-13 21:05:50.107672] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:37.596 [2024-07-13 21:05:50.107697] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:37.596 [2024-07-13 21:05:50.107710] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:37.596 [2024-07-13 21:05:50.116131] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:37.596 [2024-07-13 21:05:50.116168] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:37.596 [2024-07-13 21:05:50.122935] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:37.596 [2024-07-13 21:05:50.123661] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:37.596 [2024-07-13 21:05:50.133945] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:37.596 21:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:50 -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:37.596 21:05:50 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:37.596 21:05:50 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:37.596 21:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:50 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 21:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:50 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:37.596 21:05:50 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:37.596 21:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:50 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 [2024-07-13 21:05:50.394112] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:37.596 [2024-07-13 21:05:50.394626] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:37.596 [2024-07-13 21:05:50.394662] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:37.596 [2024-07-13 21:05:50.394673] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:37.596 [2024-07-13 21:05:50.401886] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:37.596 [2024-07-13 21:05:50.401906] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:37.596 [2024-07-13 
21:05:50.411933] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:37.596 [2024-07-13 21:05:50.412687] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:37.596 [2024-07-13 21:05:50.420938] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:37.596 21:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:50 -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:37.596 21:05:50 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:37.596 21:05:50 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:37.596 21:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:50 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 21:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:50 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:37.596 21:05:50 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:37.596 21:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:50 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 [2024-07-13 21:05:50.681110] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:37.596 [2024-07-13 21:05:50.681610] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:37.596 [2024-07-13 21:05:50.681631] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:37.596 [2024-07-13 21:05:50.681647] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:37.596 [2024-07-13 21:05:50.690136] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:37.596 [2024-07-13 21:05:50.690176] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:37.596 [2024-07-13 21:05:50.695949] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:37.596 [2024-07-13 21:05:50.696675] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:37.596 [2024-07-13 21:05:50.703967] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:37.596 21:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:50 -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:37.596 21:05:50 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:37.596 21:05:50 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:37.596 21:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:50 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 21:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:50 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:37.596 21:05:50 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:37.596 21:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:50 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 [2024-07-13 21:05:50.973103] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:37.596 [2024-07-13 21:05:50.973567] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:37.596 [2024-07-13 21:05:50.973593] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:37.596 [2024-07-13 21:05:50.973604] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: 
ctrl cmd UBLK_CMD_ADD_DEV 00:14:37.596 [2024-07-13 21:05:50.980897] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:37.596 [2024-07-13 21:05:50.980925] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:37.596 [2024-07-13 21:05:50.987937] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:37.596 [2024-07-13 21:05:50.988662] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:37.596 [2024-07-13 21:05:50.993917] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:37.596 21:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:51 -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:37.596 21:05:51 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:37.596 21:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.596 21:05:51 -- common/autotest_common.sh@10 -- # set +x 00:14:37.596 21:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.596 21:05:51 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:37.596 { 00:14:37.596 "ublk_device": "/dev/ublkb0", 00:14:37.596 "id": 0, 00:14:37.596 "queue_depth": 512, 00:14:37.596 "num_queues": 4, 00:14:37.596 "bdev_name": "Malloc0" 00:14:37.596 }, 00:14:37.596 { 00:14:37.596 "ublk_device": "/dev/ublkb1", 00:14:37.596 "id": 1, 00:14:37.596 "queue_depth": 512, 00:14:37.596 "num_queues": 4, 00:14:37.596 "bdev_name": "Malloc1" 00:14:37.596 }, 00:14:37.596 { 00:14:37.596 "ublk_device": "/dev/ublkb2", 00:14:37.596 "id": 2, 00:14:37.596 "queue_depth": 512, 00:14:37.596 "num_queues": 4, 00:14:37.597 "bdev_name": "Malloc2" 00:14:37.597 }, 00:14:37.597 { 00:14:37.597 "ublk_device": "/dev/ublkb3", 00:14:37.597 "id": 3, 00:14:37.597 "queue_depth": 512, 00:14:37.597 "num_queues": 4, 00:14:37.597 "bdev_name": "Malloc3" 00:14:37.597 } 00:14:37.597 ]' 00:14:37.597 21:05:51 -- ublk/ublk.sh@72 -- # seq 0 3 00:14:37.597 21:05:51 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:37.597 21:05:51 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:37.597 21:05:51 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:37.597 21:05:51 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:37.597 21:05:51 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:37.597 21:05:51 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:37.597 21:05:51 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:37.597 21:05:51 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:37.597 21:05:51 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:37.597 21:05:51 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:37.597 21:05:51 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:37.597 21:05:51 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:37.597 21:05:51 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:37.597 21:05:51 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:37.597 21:05:51 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:37.597 21:05:51 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:37.597 21:05:51 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:37.597 21:05:51 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:37.597 21:05:51 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:37.597 21:05:51 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:37.597 21:05:51 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:37.855 21:05:51 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:37.855 21:05:51 -- ublk/ublk.sh@72 
-- # for i in $(seq 0 $MAX_DEV_ID) 00:14:37.855 21:05:51 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:37.855 21:05:51 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:37.855 21:05:51 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:37.855 21:05:51 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:37.855 21:05:51 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:37.855 21:05:51 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:37.856 21:05:51 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:37.856 21:05:51 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:37.856 21:05:51 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:38.114 21:05:51 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:38.114 21:05:51 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.114 21:05:51 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:38.114 21:05:51 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:38.114 21:05:51 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:38.114 21:05:51 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:38.114 21:05:51 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:38.114 21:05:51 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:38.114 21:05:51 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:38.114 21:05:52 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:38.114 21:05:52 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:38.386 21:05:52 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:38.386 21:05:52 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:38.386 21:05:52 -- ublk/ublk.sh@85 -- # seq 0 3 00:14:38.386 21:05:52 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.386 21:05:52 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:38.386 21:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.386 21:05:52 -- common/autotest_common.sh@10 -- # set +x 00:14:38.386 [2024-07-13 21:05:52.063194] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:38.386 [2024-07-13 21:05:52.094993] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:38.386 [2024-07-13 21:05:52.096451] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:38.386 [2024-07-13 21:05:52.102923] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:38.386 [2024-07-13 21:05:52.103266] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:38.386 [2024-07-13 21:05:52.103294] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:38.386 21:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.386 21:05:52 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.386 21:05:52 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:38.386 21:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.386 21:05:52 -- common/autotest_common.sh@10 -- # set +x 00:14:38.386 [2024-07-13 21:05:52.118018] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:38.386 [2024-07-13 21:05:52.154936] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:38.386 [2024-07-13 21:05:52.156317] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:38.387 [2024-07-13 21:05:52.163867] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:38.387 [2024-07-13 21:05:52.164219] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: 
ublk1: remove from tailq 00:14:38.387 [2024-07-13 21:05:52.164247] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:38.387 21:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.387 21:05:52 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.387 21:05:52 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:38.387 21:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.387 21:05:52 -- common/autotest_common.sh@10 -- # set +x 00:14:38.387 [2024-07-13 21:05:52.169099] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:38.387 [2024-07-13 21:05:52.201964] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:38.387 [2024-07-13 21:05:52.203220] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:38.387 [2024-07-13 21:05:52.209941] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:38.387 [2024-07-13 21:05:52.210272] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:38.387 [2024-07-13 21:05:52.210300] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:38.387 21:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.387 21:05:52 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.387 21:05:52 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:38.387 21:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.387 21:05:52 -- common/autotest_common.sh@10 -- # set +x 00:14:38.387 [2024-07-13 21:05:52.225966] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:38.387 [2024-07-13 21:05:52.253420] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:38.387 [2024-07-13 21:05:52.257180] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:38.387 [2024-07-13 21:05:52.263864] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:38.387 [2024-07-13 21:05:52.264211] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:38.387 [2024-07-13 21:05:52.264238] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:38.387 21:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.387 21:05:52 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:38.673 [2024-07-13 21:05:52.510047] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:38.673 [2024-07-13 21:05:52.517900] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:38.673 [2024-07-13 21:05:52.517947] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:38.673 21:05:52 -- ublk/ublk.sh@93 -- # seq 0 3 00:14:38.673 21:05:52 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.673 21:05:52 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:38.673 21:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.673 21:05:52 -- common/autotest_common.sh@10 -- # set +x 00:14:38.932 21:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.932 21:05:52 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.932 21:05:52 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:38.932 21:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.932 21:05:52 -- common/autotest_common.sh@10 -- # set +x 00:14:39.500 21:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.500 
21:05:53 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:39.500 21:05:53 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:39.500 21:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.500 21:05:53 -- common/autotest_common.sh@10 -- # set +x 00:14:39.760 21:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.760 21:05:53 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:39.760 21:05:53 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:39.760 21:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.760 21:05:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.019 21:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.019 21:05:53 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:40.019 21:05:53 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:40.019 21:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.019 21:05:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.019 21:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.019 21:05:53 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:40.019 21:05:53 -- lvol/common.sh@26 -- # jq length 00:14:40.019 21:05:53 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:40.019 21:05:53 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:40.019 21:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.019 21:05:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.019 21:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.019 21:05:53 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:40.019 21:05:53 -- lvol/common.sh@28 -- # jq length 00:14:40.019 21:05:53 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:40.019 00:14:40.019 real 0m4.022s 00:14:40.019 user 0m1.287s 00:14:40.019 sys 0m0.175s 00:14:40.019 21:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.019 ************************************ 00:14:40.019 21:05:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.019 END TEST test_create_multi_ublk 00:14:40.019 ************************************ 00:14:40.019 21:05:53 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:40.019 21:05:53 -- ublk/ublk.sh@147 -- # cleanup 00:14:40.019 21:05:53 -- ublk/ublk.sh@130 -- # killprocess 69795 00:14:40.019 21:05:53 -- common/autotest_common.sh@926 -- # '[' -z 69795 ']' 00:14:40.019 21:05:53 -- common/autotest_common.sh@930 -- # kill -0 69795 00:14:40.020 21:05:53 -- common/autotest_common.sh@931 -- # uname 00:14:40.020 21:05:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:40.020 21:05:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69795 00:14:40.279 21:05:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:40.279 21:05:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:40.279 killing process with pid 69795 00:14:40.279 21:05:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69795' 00:14:40.279 21:05:53 -- common/autotest_common.sh@945 -- # kill 69795 00:14:40.279 21:05:53 -- common/autotest_common.sh@950 -- # wait 69795 00:14:41.215 [2024-07-13 21:05:54.888384] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:41.215 [2024-07-13 21:05:54.888464] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:42.152 00:14:42.152 real 0m28.423s 00:14:42.152 user 0m43.342s 00:14:42.152 sys 0m8.571s 00:14:42.152 ************************************ 00:14:42.152 END TEST ublk 00:14:42.152 
************************************ 00:14:42.152 21:05:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.152 21:05:55 -- common/autotest_common.sh@10 -- # set +x 00:14:42.152 21:05:55 -- spdk/autotest.sh@260 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:42.152 21:05:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:42.152 21:05:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:42.152 21:05:55 -- common/autotest_common.sh@10 -- # set +x 00:14:42.152 ************************************ 00:14:42.152 START TEST ublk_recovery 00:14:42.152 ************************************ 00:14:42.152 21:05:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:42.152 * Looking for test storage... 00:14:42.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:42.152 21:05:56 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:42.152 21:05:56 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:42.152 21:05:56 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:42.152 21:05:56 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:42.152 21:05:56 -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:42.152 21:05:56 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:42.152 21:05:56 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:42.152 21:05:56 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:42.152 21:05:56 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:42.152 21:05:56 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:42.152 21:05:56 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=70188 00:14:42.152 21:05:56 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.152 21:05:56 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:42.152 21:05:56 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 70188 00:14:42.152 21:05:56 -- common/autotest_common.sh@819 -- # '[' -z 70188 ']' 00:14:42.152 21:05:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.152 21:05:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:42.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.152 21:05:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.152 21:05:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:42.152 21:05:56 -- common/autotest_common.sh@10 -- # set +x 00:14:42.411 [2024-07-13 21:05:56.137500] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
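The ublk_recovery test starting here is the crash/recover scenario: keep fio running against /dev/ublkb1, kill -9 the target mid-I/O, start a new target, and re-attach the surviving kernel device with ublk_recover_disk (the UBLK_CMD_START_USER_RECOVERY / END_USER_RECOVERY pair seen later in this log). A condensed sketch of that sequence, assuming spdk_tgt and rpc.py on PATH and an already-loaded ublk_drv module; pid handling and socket waiting are simplified for readability:

    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    fio_pid=$!
    kill -9 "$spdk_pid"                 # simulate a crash; $spdk_pid is the old target's pid
    spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
    # once the new target's RPC socket is up:
    rpc.py ublk_recover_disk malloc0 1  # re-adopts /dev/ublkb1 without recreating it
    wait "$fio_pid"                     # fio rides out the outage and finishes its run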
00:14:42.412 [2024-07-13 21:05:56.137665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70188 ] 00:14:42.412 [2024-07-13 21:05:56.303878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:42.670 [2024-07-13 21:05:56.470043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:42.671 [2024-07-13 21:05:56.470419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.671 [2024-07-13 21:05:56.470437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.049 21:05:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:44.049 21:05:57 -- common/autotest_common.sh@852 -- # return 0 00:14:44.049 21:05:57 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:44.049 21:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.049 21:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:44.049 [2024-07-13 21:05:57.735235] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:44.049 21:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.049 21:05:57 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:44.049 21:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.049 21:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:44.049 malloc0 00:14:44.049 21:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.049 21:05:57 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:44.049 21:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.049 21:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:44.049 [2024-07-13 21:05:57.857130] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:44.049 [2024-07-13 21:05:57.857296] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:44.049 [2024-07-13 21:05:57.857311] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:44.049 [2024-07-13 21:05:57.857337] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:44.049 [2024-07-13 21:05:57.866050] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:44.049 [2024-07-13 21:05:57.866080] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:44.049 [2024-07-13 21:05:57.875944] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:44.049 [2024-07-13 21:05:57.876115] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:44.049 [2024-07-13 21:05:57.891902] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:44.049 1 00:14:44.049 21:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.049 21:05:57 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:44.985 21:05:58 -- ublk/ublk_recovery.sh@31 -- # fio_proc=70231 00:14:44.985 21:05:58 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:44.985 21:05:58 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:45.244 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:45.244 fio-3.35 00:14:45.244 Starting 1 process 00:14:50.516 21:06:03 -- ublk/ublk_recovery.sh@36 -- # kill -9 70188 00:14:50.516 21:06:03 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:55.786 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 70188 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:55.786 21:06:08 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=70342 00:14:55.786 21:06:08 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:55.786 21:06:08 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.786 21:06:08 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 70342 00:14:55.786 21:06:08 -- common/autotest_common.sh@819 -- # '[' -z 70342 ']' 00:14:55.786 21:06:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.786 21:06:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:55.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.786 21:06:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.786 21:06:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:55.786 21:06:08 -- common/autotest_common.sh@10 -- # set +x 00:14:55.786 [2024-07-13 21:06:09.023964] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:55.786 [2024-07-13 21:06:09.024119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70342 ] 00:14:55.786 [2024-07-13 21:06:09.192760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:55.786 [2024-07-13 21:06:09.365268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:55.786 [2024-07-13 21:06:09.365811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.786 [2024-07-13 21:06:09.365829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.720 21:06:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:56.720 21:06:10 -- common/autotest_common.sh@852 -- # return 0 00:14:56.720 21:06:10 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:56.720 21:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.720 21:06:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.979 [2024-07-13 21:06:10.644377] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:56.979 21:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.979 21:06:10 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:56.979 21:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.979 21:06:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.979 malloc0 00:14:56.979 21:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.979 21:06:10 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:56.979 21:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.979 21:06:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.979 [2024-07-13 21:06:10.770130] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:56.979 [2024-07-13 21:06:10.770189] ublk.c: 
933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:56.979 [2024-07-13 21:06:10.770203] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:56.979 [2024-07-13 21:06:10.777036] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:56.979 [2024-07-13 21:06:10.777079] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:56.979 [2024-07-13 21:06:10.777194] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:56.979 1 00:14:56.979 21:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.979 21:06:10 -- ublk/ublk_recovery.sh@52 -- # wait 70231 00:15:23.590 [2024-07-13 21:06:34.401925] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:23.590 [2024-07-13 21:06:34.407768] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:23.590 [2024-07-13 21:06:34.413260] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:23.590 [2024-07-13 21:06:34.413328] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:45.541 00:15:45.541 fio_test: (groupid=0, jobs=1): err= 0: pid=70234: Sat Jul 13 21:06:59 2024 00:15:45.541 read: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(2460MiB/60001msec) 00:15:45.541 slat (usec): min=2, max=247, avg= 6.10, stdev= 2.84 00:15:45.541 clat (usec): min=1111, max=30518k, avg=5799.51, stdev=295333.45 00:15:45.541 lat (usec): min=1119, max=30518k, avg=5805.61, stdev=295333.46 00:15:45.541 clat percentiles (usec): 00:15:45.541 | 1.00th=[ 2343], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2606], 00:15:45.541 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2835], 00:15:45.541 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3195], 95.00th=[ 4293], 00:15:45.541 | 99.00th=[ 6325], 99.50th=[ 6849], 99.90th=[ 8291], 99.95th=[ 9372], 00:15:45.541 | 99.99th=[13304] 00:15:45.541 bw ( KiB/s): min=20016, max=92712, per=100.00%, avg=84040.27, stdev=13360.22, samples=59 00:15:45.541 iops : min= 5004, max=23178, avg=21010.07, stdev=3340.06, samples=59 00:15:45.541 write: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(2458MiB/60001msec); 0 zone resets 00:15:45.541 slat (usec): min=2, max=226, avg= 6.26, stdev= 3.00 00:15:45.541 clat (usec): min=1148, max=30519k, avg=6386.13, stdev=319501.67 00:15:45.541 lat (usec): min=1156, max=30519k, avg=6392.39, stdev=319501.68 00:15:45.541 clat percentiles (msec): 00:15:45.541 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:15:45.541 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:15:45.541 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:15:45.541 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:15:45.541 | 99.99th=[17113] 00:15:45.541 bw ( KiB/s): min=19736, max=92336, per=100.00%, avg=83942.63, stdev=13279.90, samples=59 00:15:45.541 iops : min= 4934, max=23084, avg=20985.64, stdev=3319.97, samples=59 00:15:45.541 lat (msec) : 2=0.17%, 4=94.33%, 10=5.46%, 20=0.03%, >=2000=0.01% 00:15:45.541 cpu : usr=5.58%, sys=11.94%, ctx=38977, majf=0, minf=13 00:15:45.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:45.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.541 issued rwts: total=629784,629312,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:15:45.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.541 00:15:45.541 Run status group 0 (all jobs): 00:15:45.541 READ: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=2460MiB (2580MB), run=60001-60001msec 00:15:45.541 WRITE: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=2458MiB (2578MB), run=60001-60001msec 00:15:45.541 00:15:45.541 Disk stats (read/write): 00:15:45.541 ublkb1: ios=627292/626756, merge=0/0, ticks=3592298/3891011, in_queue=7483309, util=99.92% 00:15:45.541 21:06:59 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:45.541 21:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.541 21:06:59 -- common/autotest_common.sh@10 -- # set +x 00:15:45.541 [2024-07-13 21:06:59.156758] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:45.541 [2024-07-13 21:06:59.196025] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:45.541 [2024-07-13 21:06:59.196321] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:45.541 [2024-07-13 21:06:59.205009] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:45.541 [2024-07-13 21:06:59.205162] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:45.541 [2024-07-13 21:06:59.205185] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:45.541 21:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.541 21:06:59 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:45.541 21:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.541 21:06:59 -- common/autotest_common.sh@10 -- # set +x 00:15:45.541 [2024-07-13 21:06:59.221944] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:45.541 [2024-07-13 21:06:59.228980] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:45.541 [2024-07-13 21:06:59.229037] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:45.541 21:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.541 21:06:59 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:45.541 21:06:59 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:45.541 21:06:59 -- ublk/ublk_recovery.sh@14 -- # killprocess 70342 00:15:45.541 21:06:59 -- common/autotest_common.sh@926 -- # '[' -z 70342 ']' 00:15:45.541 21:06:59 -- common/autotest_common.sh@930 -- # kill -0 70342 00:15:45.541 21:06:59 -- common/autotest_common.sh@931 -- # uname 00:15:45.541 21:06:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:45.541 21:06:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70342 00:15:45.541 killing process with pid 70342 00:15:45.541 21:06:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:45.541 21:06:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:45.541 21:06:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70342' 00:15:45.541 21:06:59 -- common/autotest_common.sh@945 -- # kill 70342 00:15:45.541 21:06:59 -- common/autotest_common.sh@950 -- # wait 70342 00:15:46.479 [2024-07-13 21:07:00.148979] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:46.479 [2024-07-13 21:07:00.149056] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:47.416 00:15:47.416 real 1m5.323s 00:15:47.416 user 1m51.958s 00:15:47.416 sys 0m18.716s 00:15:47.416 21:07:01 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:15:47.416 21:07:01 -- common/autotest_common.sh@10 -- # set +x 00:15:47.416 ************************************ 00:15:47.416 END TEST ublk_recovery 00:15:47.416 ************************************ 00:15:47.416 21:07:01 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:15:47.416 21:07:01 -- spdk/autotest.sh@268 -- # timing_exit lib 00:15:47.416 21:07:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:47.416 21:07:01 -- common/autotest_common.sh@10 -- # set +x 00:15:47.675 21:07:01 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:15:47.675 21:07:01 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:15:47.675 21:07:01 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:15:47.675 21:07:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:47.675 21:07:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:47.675 21:07:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:47.675 21:07:01 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:47.675 21:07:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:47.675 21:07:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:47.675 21:07:01 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:47.675 21:07:01 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:47.675 21:07:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:47.675 21:07:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:47.675 21:07:01 -- common/autotest_common.sh@10 -- # set +x 00:15:47.675 ************************************ 00:15:47.675 START TEST ftl 00:15:47.675 ************************************ 00:15:47.675 21:07:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:47.675 * Looking for test storage... 00:15:47.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:47.675 21:07:01 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:47.675 21:07:01 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:47.675 21:07:01 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:47.675 21:07:01 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:47.675 21:07:01 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
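For reference, the recovery flow that ublk_recovery.sh just exercised condenses to the following rpc.py sequence (a sketch reconstructed from the xtrace lines above; the start-disk step happened before this excerpt and is an assumption inferred from the "num queues 2, queue depth 128" recovery notice):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &       # original target
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096      # 64 MiB bdev, 4096-byte blocks
    $rpc ublk_start_disk malloc0 1                  # assumption: started with 2 queues, depth 128
    # fio runs a 60 s verify job against /dev/ublkb1 in the background
    # (libaio, iodepth=128, 4k blocks, per the fio header above)
    kill -9 "$old_tgt_pid"                          # hard-kill the target mid-I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &       # replacement target
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096
    $rpc ublk_recover_disk malloc0 1                # UBLK_CMD_START/END_USER_RECOVERY
    # fio must then finish with err=0, as it does above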
00:15:47.675 21:07:01 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:47.675 21:07:01 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.675 21:07:01 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:47.675 21:07:01 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:47.675 21:07:01 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.675 21:07:01 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.675 21:07:01 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:47.675 21:07:01 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:47.675 21:07:01 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:47.675 21:07:01 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:47.675 21:07:01 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:47.675 21:07:01 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:47.675 21:07:01 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.675 21:07:01 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.675 21:07:01 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:47.675 21:07:01 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:47.675 21:07:01 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:47.675 21:07:01 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:47.675 21:07:01 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:47.675 21:07:01 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:47.675 21:07:01 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:47.675 21:07:01 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:47.675 21:07:01 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:47.675 21:07:01 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:47.675 21:07:01 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.675 21:07:01 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:47.675 21:07:01 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:47.675 21:07:01 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:47.675 21:07:01 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:47.675 21:07:01 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:48.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:48.256 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:48.256 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:48.256 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:48.256 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:48.256 21:07:02 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=71134 00:15:48.256 21:07:02 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:48.256 21:07:02 -- ftl/ftl.sh@38 -- # waitforlisten 71134 00:15:48.256 21:07:02 -- common/autotest_common.sh@819 -- # '[' -z 71134 ']' 00:15:48.256 21:07:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.256 21:07:02 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:15:48.256 21:07:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.256 21:07:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:48.256 21:07:02 -- common/autotest_common.sh@10 -- # set +x 00:15:48.256 [2024-07-13 21:07:02.114030] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:48.256 [2024-07-13 21:07:02.114197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71134 ] 00:15:48.566 [2024-07-13 21:07:02.283703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.566 [2024-07-13 21:07:02.451237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:48.566 [2024-07-13 21:07:02.451488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.160 21:07:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:49.161 21:07:03 -- common/autotest_common.sh@852 -- # return 0 00:15:49.161 21:07:03 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:49.426 21:07:03 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:50.363 21:07:04 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:50.363 21:07:04 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:50.931 21:07:04 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:50.931 21:07:04 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:50.931 21:07:04 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:51.190 21:07:04 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:15:51.190 21:07:04 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:51.190 21:07:04 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:15:51.190 21:07:04 -- ftl/ftl.sh@50 -- # break 00:15:51.190 21:07:04 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:15:51.190 21:07:04 -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:51.190 21:07:04 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:51.190 21:07:04 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:51.190 21:07:05 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:15:51.190 21:07:05 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:51.190 21:07:05 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:15:51.190 21:07:05 -- ftl/ftl.sh@63 -- # break 00:15:51.190 21:07:05 -- ftl/ftl.sh@66 -- # killprocess 71134 00:15:51.190 21:07:05 -- common/autotest_common.sh@926 -- # '[' -z 71134 ']' 00:15:51.190 21:07:05 -- common/autotest_common.sh@930 -- # kill -0 71134 00:15:51.190 21:07:05 -- common/autotest_common.sh@931 -- # uname 00:15:51.448 21:07:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:51.448 21:07:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71134 00:15:51.448 21:07:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
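Restating the two jq filters above for readability (same selectors as in the xtrace, just unflattened): the probe target only enumerates the PCIe NVMe namespaces, then ftl.sh picks the cache and base devices like so:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # cache device: a non-zoned namespace with 64-byte metadata, >= 1310720 blocks
    $rpc bdev_get_bdevs | jq -r '.[]
      | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
      | .driver_specific.nvme[].pci_address'        # -> 0000:00:06.0 in this run
    # base device: any other non-zoned namespace of at least 1310720 blocks
    $rpc bdev_get_bdevs | jq -r '.[]
      | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0"
               and .zoned == false and .num_blocks >= 1310720)
      | .driver_specific.nvme[].pci_address'        # -> 0000:00:07.0 in this run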
00:15:51.449 21:07:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:51.449 killing process with pid 71134 00:15:51.449 21:07:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71134' 00:15:51.449 21:07:05 -- common/autotest_common.sh@945 -- # kill 71134 00:15:51.449 21:07:05 -- common/autotest_common.sh@950 -- # wait 71134 00:15:53.355 21:07:06 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:15:53.355 21:07:06 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:15:53.355 21:07:06 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:15:53.355 21:07:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:53.355 21:07:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:53.355 21:07:06 -- common/autotest_common.sh@10 -- # set +x 00:15:53.355 ************************************ 00:15:53.355 START TEST ftl_fio_basic 00:15:53.355 ************************************ 00:15:53.355 21:07:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:15:53.355 * Looking for test storage... 00:15:53.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:53.355 21:07:07 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:53.355 21:07:07 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:53.355 21:07:07 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:53.355 21:07:07 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:53.355 21:07:07 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:53.355 21:07:07 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:53.355 21:07:07 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.355 21:07:07 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:53.355 21:07:07 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:53.355 21:07:07 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:53.355 21:07:07 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:53.355 21:07:07 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:53.355 21:07:07 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:53.355 21:07:07 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:53.355 21:07:07 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:53.355 21:07:07 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:53.355 21:07:07 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:53.355 21:07:07 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:53.355 21:07:07 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:53.355 21:07:07 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:53.355 21:07:07 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:53.355 21:07:07 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:53.355 21:07:07 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:53.355 21:07:07 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:53.355 21:07:07 -- ftl/common.sh@22 -- # 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:53.355 21:07:07 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:53.355 21:07:07 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:53.355 21:07:07 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:53.355 21:07:07 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:53.355 21:07:07 -- ftl/fio.sh@11 -- # declare -A suite 00:15:53.355 21:07:07 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:53.355 21:07:07 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:53.355 21:07:07 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:53.355 21:07:07 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.355 21:07:07 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:15:53.355 21:07:07 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:15:53.355 21:07:07 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:53.355 21:07:07 -- ftl/fio.sh@26 -- # uuid= 00:15:53.355 21:07:07 -- ftl/fio.sh@27 -- # timeout=240 00:15:53.355 21:07:07 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:53.355 21:07:07 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:53.355 21:07:07 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:53.355 21:07:07 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:53.355 21:07:07 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:53.355 21:07:07 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:53.355 21:07:07 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:53.355 21:07:07 -- ftl/fio.sh@45 -- # svcpid=71267 00:15:53.355 21:07:07 -- ftl/fio.sh@46 -- # waitforlisten 71267 00:15:53.355 21:07:07 -- common/autotest_common.sh@819 -- # '[' -z 71267 ']' 00:15:53.355 21:07:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.355 21:07:07 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:53.355 21:07:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:53.355 21:07:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.355 21:07:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:53.355 21:07:07 -- common/autotest_common.sh@10 -- # set +x 00:15:53.355 [2024-07-13 21:07:07.139820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
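The suite table above maps each mode to a list of fio job names; 'basic' resolves to randw-verify, randw-verify-j2 and randw-verify-depth128. A sketch of how fio.sh presumably consumes them further down (the per-test job files under test/ftl/config/fio/ and the fio_bdev helper are assumptions, not shown in this excerpt):

    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    for t in randw-verify randw-verify-j2 randw-verify-depth128; do
        # assumption: each suite entry names a job file run through SPDK's
        # bdev ioengine, with the FTL bdev injected via the env vars above
        timeout 240 fio_bdev "$testdir/config/fio/$t.fio"
    done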
00:15:53.355 [2024-07-13 21:07:07.140034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71267 ] 00:15:53.615 [2024-07-13 21:07:07.309478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:53.615 [2024-07-13 21:07:07.470073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.615 [2024-07-13 21:07:07.470444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.615 [2024-07-13 21:07:07.470596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.615 [2024-07-13 21:07:07.470614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.992 21:07:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:54.992 21:07:08 -- common/autotest_common.sh@852 -- # return 0 00:15:54.992 21:07:08 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:54.992 21:07:08 -- ftl/common.sh@54 -- # local name=nvme0 00:15:54.992 21:07:08 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:54.992 21:07:08 -- ftl/common.sh@56 -- # local size=103424 00:15:54.992 21:07:08 -- ftl/common.sh@59 -- # local base_bdev 00:15:54.992 21:07:08 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:55.251 21:07:09 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:55.251 21:07:09 -- ftl/common.sh@62 -- # local base_size 00:15:55.251 21:07:09 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:55.251 21:07:09 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:15:55.251 21:07:09 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:55.251 21:07:09 -- common/autotest_common.sh@1359 -- # local bs 00:15:55.251 21:07:09 -- common/autotest_common.sh@1360 -- # local nb 00:15:55.251 21:07:09 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:55.510 21:07:09 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:55.510 { 00:15:55.510 "name": "nvme0n1", 00:15:55.510 "aliases": [ 00:15:55.510 "4d7cc528-04e9-4345-81ce-feb4abe79b70" 00:15:55.510 ], 00:15:55.510 "product_name": "NVMe disk", 00:15:55.510 "block_size": 4096, 00:15:55.510 "num_blocks": 1310720, 00:15:55.510 "uuid": "4d7cc528-04e9-4345-81ce-feb4abe79b70", 00:15:55.510 "assigned_rate_limits": { 00:15:55.510 "rw_ios_per_sec": 0, 00:15:55.510 "rw_mbytes_per_sec": 0, 00:15:55.510 "r_mbytes_per_sec": 0, 00:15:55.510 "w_mbytes_per_sec": 0 00:15:55.510 }, 00:15:55.510 "claimed": false, 00:15:55.510 "zoned": false, 00:15:55.510 "supported_io_types": { 00:15:55.510 "read": true, 00:15:55.510 "write": true, 00:15:55.510 "unmap": true, 00:15:55.510 "write_zeroes": true, 00:15:55.510 "flush": true, 00:15:55.510 "reset": true, 00:15:55.510 "compare": true, 00:15:55.510 "compare_and_write": false, 00:15:55.510 "abort": true, 00:15:55.510 "nvme_admin": true, 00:15:55.510 "nvme_io": true 00:15:55.510 }, 00:15:55.510 "driver_specific": { 00:15:55.510 "nvme": [ 00:15:55.510 { 00:15:55.510 "pci_address": "0000:00:07.0", 00:15:55.510 "trid": { 00:15:55.510 "trtype": "PCIe", 00:15:55.510 "traddr": "0000:00:07.0" 00:15:55.510 }, 00:15:55.510 "ctrlr_data": { 00:15:55.510 "cntlid": 0, 00:15:55.510 "vendor_id": "0x1b36", 00:15:55.510 "model_number": "QEMU NVMe Ctrl", 00:15:55.510 "serial_number": 
"12341", 00:15:55.510 "firmware_revision": "8.0.0", 00:15:55.510 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:55.510 "oacs": { 00:15:55.510 "security": 0, 00:15:55.510 "format": 1, 00:15:55.510 "firmware": 0, 00:15:55.510 "ns_manage": 1 00:15:55.510 }, 00:15:55.510 "multi_ctrlr": false, 00:15:55.510 "ana_reporting": false 00:15:55.510 }, 00:15:55.510 "vs": { 00:15:55.510 "nvme_version": "1.4" 00:15:55.510 }, 00:15:55.510 "ns_data": { 00:15:55.510 "id": 1, 00:15:55.510 "can_share": false 00:15:55.510 } 00:15:55.510 } 00:15:55.510 ], 00:15:55.510 "mp_policy": "active_passive" 00:15:55.510 } 00:15:55.510 } 00:15:55.510 ]' 00:15:55.510 21:07:09 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:55.510 21:07:09 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:55.511 21:07:09 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:55.511 21:07:09 -- common/autotest_common.sh@1363 -- # nb=1310720 00:15:55.511 21:07:09 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:15:55.511 21:07:09 -- common/autotest_common.sh@1367 -- # echo 5120 00:15:55.511 21:07:09 -- ftl/common.sh@63 -- # base_size=5120 00:15:55.511 21:07:09 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:55.511 21:07:09 -- ftl/common.sh@67 -- # clear_lvols 00:15:55.511 21:07:09 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:55.511 21:07:09 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:55.770 21:07:09 -- ftl/common.sh@28 -- # stores= 00:15:55.770 21:07:09 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:56.029 21:07:09 -- ftl/common.sh@68 -- # lvs=eea5342c-2435-4c33-b71e-fc2292ccb52c 00:15:56.029 21:07:09 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eea5342c-2435-4c33-b71e-fc2292ccb52c 00:15:56.287 21:07:10 -- ftl/fio.sh@48 -- # split_bdev=f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:56.287 21:07:10 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:56.287 21:07:10 -- ftl/common.sh@35 -- # local name=nvc0 00:15:56.287 21:07:10 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:56.287 21:07:10 -- ftl/common.sh@37 -- # local base_bdev=f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:56.287 21:07:10 -- ftl/common.sh@38 -- # local cache_size= 00:15:56.287 21:07:10 -- ftl/common.sh@41 -- # get_bdev_size f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:56.287 21:07:10 -- common/autotest_common.sh@1357 -- # local bdev_name=f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:56.287 21:07:10 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:56.288 21:07:10 -- common/autotest_common.sh@1359 -- # local bs 00:15:56.288 21:07:10 -- common/autotest_common.sh@1360 -- # local nb 00:15:56.288 21:07:10 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:56.546 21:07:10 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:56.546 { 00:15:56.546 "name": "f928e3ae-9dc5-4139-8384-f5e81c9e2895", 00:15:56.546 "aliases": [ 00:15:56.546 "lvs/nvme0n1p0" 00:15:56.547 ], 00:15:56.547 "product_name": "Logical Volume", 00:15:56.547 "block_size": 4096, 00:15:56.547 "num_blocks": 26476544, 00:15:56.547 "uuid": "f928e3ae-9dc5-4139-8384-f5e81c9e2895", 00:15:56.547 "assigned_rate_limits": { 00:15:56.547 "rw_ios_per_sec": 0, 00:15:56.547 "rw_mbytes_per_sec": 0, 00:15:56.547 "r_mbytes_per_sec": 0, 00:15:56.547 
"w_mbytes_per_sec": 0 00:15:56.547 }, 00:15:56.547 "claimed": false, 00:15:56.547 "zoned": false, 00:15:56.547 "supported_io_types": { 00:15:56.547 "read": true, 00:15:56.547 "write": true, 00:15:56.547 "unmap": true, 00:15:56.547 "write_zeroes": true, 00:15:56.547 "flush": false, 00:15:56.547 "reset": true, 00:15:56.547 "compare": false, 00:15:56.547 "compare_and_write": false, 00:15:56.547 "abort": false, 00:15:56.547 "nvme_admin": false, 00:15:56.547 "nvme_io": false 00:15:56.547 }, 00:15:56.547 "driver_specific": { 00:15:56.547 "lvol": { 00:15:56.547 "lvol_store_uuid": "eea5342c-2435-4c33-b71e-fc2292ccb52c", 00:15:56.547 "base_bdev": "nvme0n1", 00:15:56.547 "thin_provision": true, 00:15:56.547 "snapshot": false, 00:15:56.547 "clone": false, 00:15:56.547 "esnap_clone": false 00:15:56.547 } 00:15:56.547 } 00:15:56.547 } 00:15:56.547 ]' 00:15:56.547 21:07:10 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:56.547 21:07:10 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:56.547 21:07:10 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:56.547 21:07:10 -- common/autotest_common.sh@1363 -- # nb=26476544 00:15:56.547 21:07:10 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:15:56.547 21:07:10 -- common/autotest_common.sh@1367 -- # echo 103424 00:15:56.547 21:07:10 -- ftl/common.sh@41 -- # local base_size=5171 00:15:56.547 21:07:10 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:56.547 21:07:10 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:56.805 21:07:10 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:56.805 21:07:10 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:56.805 21:07:10 -- ftl/common.sh@48 -- # get_bdev_size f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:56.805 21:07:10 -- common/autotest_common.sh@1357 -- # local bdev_name=f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:56.805 21:07:10 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:56.805 21:07:10 -- common/autotest_common.sh@1359 -- # local bs 00:15:56.805 21:07:10 -- common/autotest_common.sh@1360 -- # local nb 00:15:56.805 21:07:10 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:57.063 21:07:10 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:57.064 { 00:15:57.064 "name": "f928e3ae-9dc5-4139-8384-f5e81c9e2895", 00:15:57.064 "aliases": [ 00:15:57.064 "lvs/nvme0n1p0" 00:15:57.064 ], 00:15:57.064 "product_name": "Logical Volume", 00:15:57.064 "block_size": 4096, 00:15:57.064 "num_blocks": 26476544, 00:15:57.064 "uuid": "f928e3ae-9dc5-4139-8384-f5e81c9e2895", 00:15:57.064 "assigned_rate_limits": { 00:15:57.064 "rw_ios_per_sec": 0, 00:15:57.064 "rw_mbytes_per_sec": 0, 00:15:57.064 "r_mbytes_per_sec": 0, 00:15:57.064 "w_mbytes_per_sec": 0 00:15:57.064 }, 00:15:57.064 "claimed": false, 00:15:57.064 "zoned": false, 00:15:57.064 "supported_io_types": { 00:15:57.064 "read": true, 00:15:57.064 "write": true, 00:15:57.064 "unmap": true, 00:15:57.064 "write_zeroes": true, 00:15:57.064 "flush": false, 00:15:57.064 "reset": true, 00:15:57.064 "compare": false, 00:15:57.064 "compare_and_write": false, 00:15:57.064 "abort": false, 00:15:57.064 "nvme_admin": false, 00:15:57.064 "nvme_io": false 00:15:57.064 }, 00:15:57.064 "driver_specific": { 00:15:57.064 "lvol": { 00:15:57.064 "lvol_store_uuid": "eea5342c-2435-4c33-b71e-fc2292ccb52c", 00:15:57.064 "base_bdev": "nvme0n1", 00:15:57.064 "thin_provision": true, 
00:15:57.064 "snapshot": false, 00:15:57.064 "clone": false, 00:15:57.064 "esnap_clone": false 00:15:57.064 } 00:15:57.064 } 00:15:57.064 } 00:15:57.064 ]' 00:15:57.064 21:07:10 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:57.064 21:07:10 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:57.064 21:07:10 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:57.323 21:07:11 -- common/autotest_common.sh@1363 -- # nb=26476544 00:15:57.323 21:07:11 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:15:57.323 21:07:11 -- common/autotest_common.sh@1367 -- # echo 103424 00:15:57.323 21:07:11 -- ftl/common.sh@48 -- # cache_size=5171 00:15:57.323 21:07:11 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:57.323 21:07:11 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:57.323 21:07:11 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:57.323 21:07:11 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:57.323 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:57.323 21:07:11 -- ftl/fio.sh@56 -- # get_bdev_size f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:57.323 21:07:11 -- common/autotest_common.sh@1357 -- # local bdev_name=f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:57.323 21:07:11 -- common/autotest_common.sh@1358 -- # local bdev_info 00:15:57.323 21:07:11 -- common/autotest_common.sh@1359 -- # local bs 00:15:57.323 21:07:11 -- common/autotest_common.sh@1360 -- # local nb 00:15:57.323 21:07:11 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f928e3ae-9dc5-4139-8384-f5e81c9e2895 00:15:57.581 21:07:11 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:15:57.581 { 00:15:57.581 "name": "f928e3ae-9dc5-4139-8384-f5e81c9e2895", 00:15:57.581 "aliases": [ 00:15:57.581 "lvs/nvme0n1p0" 00:15:57.581 ], 00:15:57.581 "product_name": "Logical Volume", 00:15:57.581 "block_size": 4096, 00:15:57.581 "num_blocks": 26476544, 00:15:57.581 "uuid": "f928e3ae-9dc5-4139-8384-f5e81c9e2895", 00:15:57.581 "assigned_rate_limits": { 00:15:57.581 "rw_ios_per_sec": 0, 00:15:57.581 "rw_mbytes_per_sec": 0, 00:15:57.581 "r_mbytes_per_sec": 0, 00:15:57.581 "w_mbytes_per_sec": 0 00:15:57.581 }, 00:15:57.581 "claimed": false, 00:15:57.581 "zoned": false, 00:15:57.581 "supported_io_types": { 00:15:57.581 "read": true, 00:15:57.581 "write": true, 00:15:57.581 "unmap": true, 00:15:57.581 "write_zeroes": true, 00:15:57.581 "flush": false, 00:15:57.581 "reset": true, 00:15:57.581 "compare": false, 00:15:57.581 "compare_and_write": false, 00:15:57.581 "abort": false, 00:15:57.581 "nvme_admin": false, 00:15:57.581 "nvme_io": false 00:15:57.581 }, 00:15:57.581 "driver_specific": { 00:15:57.581 "lvol": { 00:15:57.581 "lvol_store_uuid": "eea5342c-2435-4c33-b71e-fc2292ccb52c", 00:15:57.581 "base_bdev": "nvme0n1", 00:15:57.581 "thin_provision": true, 00:15:57.581 "snapshot": false, 00:15:57.581 "clone": false, 00:15:57.581 "esnap_clone": false 00:15:57.581 } 00:15:57.581 } 00:15:57.581 } 00:15:57.581 ]' 00:15:57.581 21:07:11 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:15:57.839 21:07:11 -- common/autotest_common.sh@1362 -- # bs=4096 00:15:57.839 21:07:11 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:15:57.839 21:07:11 -- common/autotest_common.sh@1363 -- # nb=26476544 00:15:57.839 21:07:11 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:15:57.839 21:07:11 -- common/autotest_common.sh@1367 -- # echo 103424 00:15:57.839 
21:07:11 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:57.839 21:07:11 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:57.839 21:07:11 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f928e3ae-9dc5-4139-8384-f5e81c9e2895 -c nvc0n1p0 --l2p_dram_limit 60 00:15:58.098 [2024-07-13 21:07:11.835264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.835338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:58.098 [2024-07-13 21:07:11.835378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:58.098 [2024-07-13 21:07:11.835391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.835511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.835555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:58.098 [2024-07-13 21:07:11.835571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:15:58.098 [2024-07-13 21:07:11.835583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.835658] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:58.098 [2024-07-13 21:07:11.836717] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:58.098 [2024-07-13 21:07:11.836776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.836792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:58.098 [2024-07-13 21:07:11.836807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:15:58.098 [2024-07-13 21:07:11.836820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.837025] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e029c6a7-4dec-490d-884a-9e83009cb442 00:15:58.098 [2024-07-13 21:07:11.838149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.838210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:58.098 [2024-07-13 21:07:11.838257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:58.098 [2024-07-13 21:07:11.838271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.842766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.842829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:58.098 [2024-07-13 21:07:11.842900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.384 ms 00:15:58.098 [2024-07-13 21:07:11.842920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.843031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.843053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:58.098 [2024-07-13 21:07:11.843067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:15:58.098 [2024-07-13 21:07:11.843082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.843203] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.843226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:58.098 [2024-07-13 21:07:11.843239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:58.098 [2024-07-13 21:07:11.843254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.843310] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:58.098 [2024-07-13 21:07:11.847600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.847651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:58.098 [2024-07-13 21:07:11.847687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.307 ms 00:15:58.098 [2024-07-13 21:07:11.847701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.847791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.847808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:58.098 [2024-07-13 21:07:11.847824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:15:58.098 [2024-07-13 21:07:11.847836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.847907] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:58.098 [2024-07-13 21:07:11.848059] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:58.098 [2024-07-13 21:07:11.848092] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:58.098 [2024-07-13 21:07:11.848127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:58.098 [2024-07-13 21:07:11.848157] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:58.098 [2024-07-13 21:07:11.848172] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:58.098 [2024-07-13 21:07:11.848188] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:58.098 [2024-07-13 21:07:11.848204] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:58.098 [2024-07-13 21:07:11.848221] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:58.098 [2024-07-13 21:07:11.848232] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:58.098 [2024-07-13 21:07:11.848248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.848263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:58.098 [2024-07-13 21:07:11.848279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:15:58.098 [2024-07-13 21:07:11.848290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.848382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.098 [2024-07-13 21:07:11.848399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:58.098 [2024-07-13 21:07:11.848414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.050 ms 00:15:58.098 [2024-07-13 21:07:11.848427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.098 [2024-07-13 21:07:11.848551] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:58.098 [2024-07-13 21:07:11.848569] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:58.098 [2024-07-13 21:07:11.848588] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:58.098 [2024-07-13 21:07:11.848600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:58.098 [2024-07-13 21:07:11.848615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:58.098 [2024-07-13 21:07:11.848626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:58.098 [2024-07-13 21:07:11.848640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:58.098 [2024-07-13 21:07:11.848651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:58.098 [2024-07-13 21:07:11.848664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:58.098 [2024-07-13 21:07:11.848675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:58.098 [2024-07-13 21:07:11.848688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:58.098 [2024-07-13 21:07:11.848699] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:58.098 [2024-07-13 21:07:11.848713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:58.098 [2024-07-13 21:07:11.848726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:58.098 [2024-07-13 21:07:11.848739] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:58.098 [2024-07-13 21:07:11.848750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:58.098 [2024-07-13 21:07:11.848765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:58.098 [2024-07-13 21:07:11.848776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:58.098 [2024-07-13 21:07:11.848789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:58.098 [2024-07-13 21:07:11.848800] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:58.098 [2024-07-13 21:07:11.848813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:58.098 [2024-07-13 21:07:11.848828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:58.098 [2024-07-13 21:07:11.848858] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:58.098 [2024-07-13 21:07:11.848873] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:58.098 [2024-07-13 21:07:11.848886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:58.098 [2024-07-13 21:07:11.848897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:58.098 [2024-07-13 21:07:11.848910] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:58.098 [2024-07-13 21:07:11.848921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:58.098 [2024-07-13 21:07:11.848934] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:58.098 [2024-07-13 21:07:11.848945] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:58.098 [2024-07-13 21:07:11.848957] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:58.098 [2024-07-13 21:07:11.848968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:58.098 [2024-07-13 21:07:11.848983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:58.098 [2024-07-13 21:07:11.848994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:58.098 [2024-07-13 21:07:11.849007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:58.098 [2024-07-13 21:07:11.849018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:58.098 [2024-07-13 21:07:11.849031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:58.098 [2024-07-13 21:07:11.849042] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:58.098 [2024-07-13 21:07:11.849078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:58.098 [2024-07-13 21:07:11.849089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:58.098 [2024-07-13 21:07:11.849102] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:58.098 [2024-07-13 21:07:11.849115] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:58.098 [2024-07-13 21:07:11.849128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:58.099 [2024-07-13 21:07:11.849140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:58.099 [2024-07-13 21:07:11.849154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:58.099 [2024-07-13 21:07:11.849166] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:58.099 [2024-07-13 21:07:11.849179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:58.099 [2024-07-13 21:07:11.849191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:58.099 [2024-07-13 21:07:11.849205] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:58.099 [2024-07-13 21:07:11.849217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:58.099 [2024-07-13 21:07:11.849231] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:58.099 [2024-07-13 21:07:11.849246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:58.099 [2024-07-13 21:07:11.849262] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:58.099 [2024-07-13 21:07:11.849281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:58.099 [2024-07-13 21:07:11.849295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:58.099 [2024-07-13 21:07:11.849307] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:58.099 [2024-07-13 21:07:11.849322] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:58.099 [2024-07-13 21:07:11.849335] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:58.099 
[2024-07-13 21:07:11.849349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:58.099 [2024-07-13 21:07:11.849361] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:58.099 [2024-07-13 21:07:11.849374] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:58.099 [2024-07-13 21:07:11.849386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:58.099 [2024-07-13 21:07:11.849401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:58.099 [2024-07-13 21:07:11.849413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:58.099 [2024-07-13 21:07:11.849432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:58.099 [2024-07-13 21:07:11.849444] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:58.099 [2024-07-13 21:07:11.849460] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:58.099 [2024-07-13 21:07:11.849476] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:58.099 [2024-07-13 21:07:11.849491] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:58.099 [2024-07-13 21:07:11.849505] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:58.099 [2024-07-13 21:07:11.849519] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:58.099 [2024-07-13 21:07:11.849532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.099 [2024-07-13 21:07:11.849547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:58.099 [2024-07-13 21:07:11.849562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:15:58.099 [2024-07-13 21:07:11.849577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.099 [2024-07-13 21:07:11.866708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.099 [2024-07-13 21:07:11.866777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:58.099 [2024-07-13 21:07:11.866812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.043 ms 00:15:58.099 [2024-07-13 21:07:11.866826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.099 [2024-07-13 21:07:11.866945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.099 [2024-07-13 21:07:11.866971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:58.099 [2024-07-13 21:07:11.866984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:15:58.099 [2024-07-13 21:07:11.866997] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.099 [2024-07-13 21:07:11.903655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.099 [2024-07-13 21:07:11.903729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:58.099 [2024-07-13 21:07:11.903766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.548 ms 00:15:58.099 [2024-07-13 21:07:11.903780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.099 [2024-07-13 21:07:11.903833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.099 [2024-07-13 21:07:11.903863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:58.099 [2024-07-13 21:07:11.903880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:58.099 [2024-07-13 21:07:11.903893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.099 [2024-07-13 21:07:11.904338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.099 [2024-07-13 21:07:11.904373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:58.099 [2024-07-13 21:07:11.904389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:15:58.099 [2024-07-13 21:07:11.904408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.099 [2024-07-13 21:07:11.904576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.099 [2024-07-13 21:07:11.904609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:58.099 [2024-07-13 21:07:11.904624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:15:58.099 [2024-07-13 21:07:11.904638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.099 [2024-07-13 21:07:11.928206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.099 [2024-07-13 21:07:11.928274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:58.099 [2024-07-13 21:07:11.928325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.532 ms 00:15:58.099 [2024-07-13 21:07:11.928345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.099 [2024-07-13 21:07:11.941798] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:58.099 [2024-07-13 21:07:11.955392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.099 [2024-07-13 21:07:11.955468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:58.099 [2024-07-13 21:07:11.955523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.874 ms 00:15:58.099 [2024-07-13 21:07:11.955540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.099 [2024-07-13 21:07:12.013784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.099 [2024-07-13 21:07:12.013897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:58.099 [2024-07-13 21:07:12.013922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.172 ms 00:15:58.099 [2024-07-13 21:07:12.013939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.099 [2024-07-13 21:07:12.014008] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
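The instance being brought up and scrubbed here is the one created by the bdev_ftl_create call captured in the xtrace above; annotated for reference (all arguments copied from this log):

    # -b ftl0                : name of the FTL bdev to expose
    # -d f928e3ae-...        : base device, the 103424 MiB thin lvol on nvme0n1 (0000:00:07.0)
    # -c nvc0n1p0            : write-buffer cache, the 5171 MiB split of nvc0n1 (0000:00:06.0)
    # --l2p_dram_limit 60    : cap the resident L2P at 60 MiB (hence "59 (of 60) MiB" above)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 -d f928e3ae-9dc5-4139-8384-f5e81c9e2895 -c nvc0n1p0 \
        --l2p_dram_limit 60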
00:15:58.099 [2024-07-13 21:07:12.014030] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:01.394 [2024-07-13 21:07:14.921804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:14.921905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:01.394 [2024-07-13 21:07:14.921930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2907.814 ms 00:16:01.394 [2024-07-13 21:07:14.921943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:14.922258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:14.922290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:01.394 [2024-07-13 21:07:14.922309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:16:01.394 [2024-07-13 21:07:14.922322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:14.951750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:14.951810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:01.394 [2024-07-13 21:07:14.951830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.350 ms 00:16:01.394 [2024-07-13 21:07:14.951872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:14.980915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:14.980988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:01.394 [2024-07-13 21:07:14.981013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.987 ms 00:16:01.394 [2024-07-13 21:07:14.981025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:14.981468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:14.981501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:01.394 [2024-07-13 21:07:14.981520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:16:01.394 [2024-07-13 21:07:14.981537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:15.058449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:15.058525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:01.394 [2024-07-13 21:07:15.058548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.830 ms 00:16:01.394 [2024-07-13 21:07:15.058560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:15.090673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:15.090740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:01.394 [2024-07-13 21:07:15.090763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.048 ms 00:16:01.394 [2024-07-13 21:07:15.090776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:15.094722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:15.094763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
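A timing note on this run: the one-time NV cache scrub dominates first startup. Scrubbing 4 GiB in 2907.814 ms works out to roughly 4 / 2.908 ≈ 1.4 GiB/s, and accounts for about 88% of the 3292.335 ms total "FTL startup" duration reported just below; a later startup after a clean shutdown would skip the scrub and so should take roughly the remaining ~385 ms.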
00:16:01.394 [2024-07-13 21:07:15.094785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.883 ms 00:16:01.394 [2024-07-13 21:07:15.094798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:15.126433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:15.126495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:01.394 [2024-07-13 21:07:15.126534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.533 ms 00:16:01.394 [2024-07-13 21:07:15.126547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:15.126626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:15.126648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:01.394 [2024-07-13 21:07:15.126664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:01.394 [2024-07-13 21:07:15.126677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:15.126826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.394 [2024-07-13 21:07:15.126883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:01.394 [2024-07-13 21:07:15.126903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:01.394 [2024-07-13 21:07:15.126916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.394 [2024-07-13 21:07:15.128137] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3292.335 ms, result 0 00:16:01.394 { 00:16:01.394 "name": "ftl0", 00:16:01.394 "uuid": "e029c6a7-4dec-490d-884a-9e83009cb442" 00:16:01.394 } 00:16:01.394 21:07:15 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:01.394 21:07:15 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:16:01.394 21:07:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:01.394 21:07:15 -- common/autotest_common.sh@889 -- # local i 00:16:01.394 21:07:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:01.394 21:07:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:01.394 21:07:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:01.652 21:07:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:01.911 [ 00:16:01.911 { 00:16:01.911 "name": "ftl0", 00:16:01.911 "aliases": [ 00:16:01.911 "e029c6a7-4dec-490d-884a-9e83009cb442" 00:16:01.911 ], 00:16:01.911 "product_name": "FTL disk", 00:16:01.911 "block_size": 4096, 00:16:01.911 "num_blocks": 20971520, 00:16:01.911 "uuid": "e029c6a7-4dec-490d-884a-9e83009cb442", 00:16:01.911 "assigned_rate_limits": { 00:16:01.911 "rw_ios_per_sec": 0, 00:16:01.911 "rw_mbytes_per_sec": 0, 00:16:01.911 "r_mbytes_per_sec": 0, 00:16:01.911 "w_mbytes_per_sec": 0 00:16:01.911 }, 00:16:01.911 "claimed": false, 00:16:01.911 "zoned": false, 00:16:01.911 "supported_io_types": { 00:16:01.911 "read": true, 00:16:01.911 "write": true, 00:16:01.911 "unmap": true, 00:16:01.911 "write_zeroes": true, 00:16:01.911 "flush": true, 00:16:01.911 "reset": false, 00:16:01.911 "compare": false, 00:16:01.911 "compare_and_write": false, 00:16:01.911 "abort": false, 00:16:01.911 "nvme_admin": false, 00:16:01.911 "nvme_io": false 00:16:01.911 }, 
00:16:01.911 "driver_specific": { 00:16:01.911 "ftl": { 00:16:01.911 "base_bdev": "f928e3ae-9dc5-4139-8384-f5e81c9e2895", 00:16:01.911 "cache": "nvc0n1p0" 00:16:01.911 } 00:16:01.911 } 00:16:01.911 } 00:16:01.911 ] 00:16:01.911 21:07:15 -- common/autotest_common.sh@895 -- # return 0 00:16:01.911 21:07:15 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:01.911 21:07:15 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:01.911 21:07:15 -- ftl/fio.sh@70 -- # echo ']}' 00:16:01.911 21:07:15 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:02.170 [2024-07-13 21:07:16.013040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.170 [2024-07-13 21:07:16.013137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:02.170 [2024-07-13 21:07:16.013159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:02.170 [2024-07-13 21:07:16.013189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.170 [2024-07-13 21:07:16.013233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:02.170 [2024-07-13 21:07:16.016639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.170 [2024-07-13 21:07:16.016688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:02.170 [2024-07-13 21:07:16.016722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.377 ms 00:16:02.170 [2024-07-13 21:07:16.016734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.170 [2024-07-13 21:07:16.017285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.170 [2024-07-13 21:07:16.017322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:02.170 [2024-07-13 21:07:16.017355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:16:02.170 [2024-07-13 21:07:16.017368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.170 [2024-07-13 21:07:16.020612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.170 [2024-07-13 21:07:16.020657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:02.170 [2024-07-13 21:07:16.020691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:16:02.170 [2024-07-13 21:07:16.020703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.170 [2024-07-13 21:07:16.026875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.170 [2024-07-13 21:07:16.026938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:02.170 [2024-07-13 21:07:16.026974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.118 ms 00:16:02.170 [2024-07-13 21:07:16.026985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.170 [2024-07-13 21:07:16.057582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.170 [2024-07-13 21:07:16.057640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:02.170 [2024-07-13 21:07:16.057676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.501 ms 00:16:02.170 [2024-07-13 21:07:16.057688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.170 [2024-07-13 21:07:16.076905] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.170 [2024-07-13 21:07:16.076989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:02.170 [2024-07-13 21:07:16.077028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.159 ms 00:16:02.170 [2024-07-13 21:07:16.077040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.170 [2024-07-13 21:07:16.077316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.170 [2024-07-13 21:07:16.077352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:02.170 [2024-07-13 21:07:16.077371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:16:02.170 [2024-07-13 21:07:16.077403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.430 [2024-07-13 21:07:16.107148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.430 [2024-07-13 21:07:16.107203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:02.430 [2024-07-13 21:07:16.107238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.699 ms 00:16:02.430 [2024-07-13 21:07:16.107250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.430 [2024-07-13 21:07:16.135694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.430 [2024-07-13 21:07:16.135749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:02.430 [2024-07-13 21:07:16.135784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.385 ms 00:16:02.430 [2024-07-13 21:07:16.135796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.430 [2024-07-13 21:07:16.164848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.430 [2024-07-13 21:07:16.164911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:02.430 [2024-07-13 21:07:16.164948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.968 ms 00:16:02.430 [2024-07-13 21:07:16.164960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.430 [2024-07-13 21:07:16.195056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.430 [2024-07-13 21:07:16.195115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:02.430 [2024-07-13 21:07:16.195152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.960 ms 00:16:02.430 [2024-07-13 21:07:16.195164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.430 [2024-07-13 21:07:16.195254] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:02.430 [2024-07-13 21:07:16.195278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195364] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 
21:07:16.195722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.195992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:16:02.430 [2024-07-13 21:07:16.196101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:02.430 [2024-07-13 21:07:16.196290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:02.431 [2024-07-13 21:07:16.196741] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:02.431 [2024-07-13 21:07:16.196755] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e029c6a7-4dec-490d-884a-9e83009cb442 00:16:02.431 [2024-07-13 21:07:16.196768] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:02.431 [2024-07-13 21:07:16.196781] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:02.431 [2024-07-13 21:07:16.196793] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:02.431 [2024-07-13 21:07:16.196807] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:02.431 [2024-07-13 21:07:16.196818] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:02.431 [2024-07-13 21:07:16.196831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:02.431 [2024-07-13 21:07:16.196857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:02.431 [2024-07-13 21:07:16.196872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:02.431 [2024-07-13 21:07:16.196883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:02.431 [2024-07-13 21:07:16.196899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.431 [2024-07-13 21:07:16.196911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:02.431 [2024-07-13 21:07:16.196926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:16:02.431 [2024-07-13 21:07:16.196941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.431 [2024-07-13 21:07:16.213324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.431 [2024-07-13 21:07:16.213378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:02.431 [2024-07-13 21:07:16.213414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.304 ms 00:16:02.431 [2024-07-13 21:07:16.213426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.431 [2024-07-13 21:07:16.213712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.431 [2024-07-13 21:07:16.213737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:02.431 [2024-07-13 21:07:16.213758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:16:02.431 [2024-07-13 21:07:16.213770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.431 [2024-07-13 21:07:16.268140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.431 [2024-07-13 21:07:16.268208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:02.431 [2024-07-13 21:07:16.268245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.431 [2024-07-13 21:07:16.268258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.431 [2024-07-13 21:07:16.268359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.431 [2024-07-13 21:07:16.268375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:02.431 [2024-07-13 21:07:16.268409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.431 [2024-07-13 21:07:16.268429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.431 [2024-07-13 21:07:16.268592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.431 [2024-07-13 21:07:16.268613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:02.431 [2024-07-13 21:07:16.268629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.431 [2024-07-13 21:07:16.268641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.431 [2024-07-13 21:07:16.268680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.431 [2024-07-13 21:07:16.268695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:16:02.431 [2024-07-13 21:07:16.268709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.431 [2024-07-13 21:07:16.268724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.691 [2024-07-13 21:07:16.378957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.691 [2024-07-13 21:07:16.379039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:02.691 [2024-07-13 21:07:16.379078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.691 [2024-07-13 21:07:16.379090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.691 [2024-07-13 21:07:16.416039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.691 [2024-07-13 21:07:16.416090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:02.691 [2024-07-13 21:07:16.416116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.691 [2024-07-13 21:07:16.416129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.691 [2024-07-13 21:07:16.416238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.691 [2024-07-13 21:07:16.416258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:02.691 [2024-07-13 21:07:16.416274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.691 [2024-07-13 21:07:16.416287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.691 [2024-07-13 21:07:16.416402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.691 [2024-07-13 21:07:16.416420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:02.691 [2024-07-13 21:07:16.416436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.691 [2024-07-13 21:07:16.416449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.691 [2024-07-13 21:07:16.416599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.691 [2024-07-13 21:07:16.416629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:02.691 [2024-07-13 21:07:16.416646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.691 [2024-07-13 21:07:16.416659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.691 [2024-07-13 21:07:16.416739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.691 [2024-07-13 21:07:16.416757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:02.691 [2024-07-13 21:07:16.416772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.691 [2024-07-13 21:07:16.416785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.691 [2024-07-13 21:07:16.416870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.691 [2024-07-13 21:07:16.416900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:02.691 [2024-07-13 21:07:16.416917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.691 [2024-07-13 21:07:16.416930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.691 [2024-07-13 21:07:16.417001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.691 [2024-07-13 21:07:16.417019] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:02.691 [2024-07-13 21:07:16.417035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.691 [2024-07-13 21:07:16.417048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.691 [2024-07-13 21:07:16.417245] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.166 ms, result 0 00:16:02.691 true 00:16:02.691 21:07:16 -- ftl/fio.sh@75 -- # killprocess 71267 00:16:02.691 21:07:16 -- common/autotest_common.sh@926 -- # '[' -z 71267 ']' 00:16:02.691 21:07:16 -- common/autotest_common.sh@930 -- # kill -0 71267 00:16:02.691 21:07:16 -- common/autotest_common.sh@931 -- # uname 00:16:02.691 21:07:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:02.691 21:07:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71267 00:16:02.691 21:07:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:02.691 21:07:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:02.691 killing process with pid 71267 00:16:02.691 21:07:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71267' 00:16:02.691 21:07:16 -- common/autotest_common.sh@945 -- # kill 71267 00:16:02.691 21:07:16 -- common/autotest_common.sh@950 -- # wait 71267 00:16:06.902 21:07:20 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:06.902 21:07:20 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:06.902 21:07:20 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:06.902 21:07:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:06.902 21:07:20 -- common/autotest_common.sh@10 -- # set +x 00:16:06.902 21:07:20 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:06.902 21:07:20 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:06.902 21:07:20 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:06.902 21:07:20 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:06.902 21:07:20 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:06.902 21:07:20 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.902 21:07:20 -- common/autotest_common.sh@1320 -- # shift 00:16:06.902 21:07:20 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:06.902 21:07:20 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:06.902 21:07:20 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:06.902 21:07:20 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.902 21:07:20 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:06.902 21:07:20 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:06.902 21:07:20 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:06.902 21:07:20 -- common/autotest_common.sh@1326 -- # break 00:16:06.902 21:07:20 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:06.902 21:07:20 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:06.902 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:06.902 fio-3.35 00:16:06.902 Starting 1 thread 00:16:12.182 00:16:12.182 test: (groupid=0, jobs=1): err= 0: pid=71483: Sat Jul 13 21:07:25 2024 00:16:12.182 read: IOPS=924, BW=61.4MiB/s (64.4MB/s)(255MiB/4144msec) 00:16:12.182 slat (nsec): min=5115, max=35138, avg=6998.86, stdev=3265.43 00:16:12.182 clat (usec): min=328, max=1292, avg=483.34, stdev=54.36 00:16:12.182 lat (usec): min=333, max=1306, avg=490.34, stdev=55.02 00:16:12.182 clat percentiles (usec): 00:16:12.182 | 1.00th=[ 375], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 441], 00:16:12.182 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 486], 00:16:12.182 | 70.00th=[ 502], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 586], 00:16:12.182 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 898], 99.95th=[ 971], 00:16:12.182 | 99.99th=[ 1287] 00:16:12.182 write: IOPS=931, BW=61.9MiB/s (64.9MB/s)(256MiB/4139msec); 0 zone resets 00:16:12.182 slat (nsec): min=18172, max=78694, avg=24413.55, stdev=6660.97 00:16:12.182 clat (usec): min=378, max=1036, avg=548.98, stdev=64.56 00:16:12.182 lat (usec): min=399, max=1063, avg=573.39, stdev=65.14 00:16:12.182 clat percentiles (usec): 00:16:12.182 | 1.00th=[ 437], 5.00th=[ 461], 10.00th=[ 478], 20.00th=[ 506], 00:16:12.182 | 30.00th=[ 519], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 553], 00:16:12.182 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 644], 00:16:12.182 | 99.00th=[ 832], 99.50th=[ 889], 99.90th=[ 963], 99.95th=[ 1012], 00:16:12.182 | 99.99th=[ 1037] 00:16:12.182 bw ( KiB/s): min=61064, max=64736, per=99.99%, avg=63342.00, stdev=1158.00, samples=8 00:16:12.182 iops : min= 898, max= 952, avg=931.50, stdev=17.03, samples=8 00:16:12.182 lat (usec) : 500=43.36%, 750=55.79%, 1000=0.81% 00:16:12.182 lat (msec) : 2=0.04% 00:16:12.182 cpu : usr=99.30%, sys=0.12%, ctx=9, majf=0, minf=1318 00:16:12.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.182 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.182 00:16:12.182 Run status group 0 (all jobs): 00:16:12.182 READ: bw=61.4MiB/s (64.4MB/s), 61.4MiB/s-61.4MiB/s (64.4MB/s-64.4MB/s), io=255MiB (267MB), run=4144-4144msec 00:16:12.182 WRITE: bw=61.9MiB/s (64.9MB/s), 61.9MiB/s-61.9MiB/s (64.9MB/s-64.9MB/s), io=256MiB (269MB), run=4139-4139msec 00:16:13.119 ----------------------------------------------------- 00:16:13.119 Suppressions used: 00:16:13.119 count bytes template 00:16:13.119 1 5 /usr/src/fio/parse.c 00:16:13.119 1 8 libtcmalloc_minimal.so 00:16:13.119 1 904 libcrypto.so 00:16:13.119 ----------------------------------------------------- 00:16:13.119 00:16:13.119 21:07:27 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:13.119 21:07:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:13.119 21:07:27 -- common/autotest_common.sh@10 -- # set +x 00:16:13.377 21:07:27 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:13.377 21:07:27 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:13.377 21:07:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:13.377 21:07:27 -- common/autotest_common.sh@10 -- # set +x 00:16:13.377 21:07:27 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:13.377 21:07:27 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:13.377 21:07:27 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:13.377 21:07:27 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:13.377 21:07:27 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:13.377 21:07:27 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:13.377 21:07:27 -- common/autotest_common.sh@1320 -- # shift 00:16:13.377 21:07:27 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:13.377 21:07:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:13.377 21:07:27 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:13.377 21:07:27 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:13.377 21:07:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:13.377 21:07:27 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:13.377 21:07:27 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:13.377 21:07:27 -- common/autotest_common.sh@1326 -- # break 00:16:13.377 21:07:27 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:13.377 21:07:27 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:13.377 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:13.378 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:13.378 fio-3.35 00:16:13.378 Starting 2 threads 00:16:45.466 00:16:45.466 first_half: (groupid=0, jobs=1): err= 0: pid=71587: Sat Jul 13 21:07:56 2024 00:16:45.466 read: IOPS=2377, BW=9512KiB/s (9740kB/s)(255MiB/27465msec) 00:16:45.466 slat (nsec): min=4412, max=85050, avg=7215.72, stdev=2001.78 00:16:45.466 clat (usec): min=893, max=294831, avg=43004.88, stdev=20108.02 00:16:45.466 lat (usec): min=901, max=294838, avg=43012.09, stdev=20108.19 00:16:45.466 clat percentiles (msec): 00:16:45.466 | 1.00th=[ 21], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 38], 00:16:45.466 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39], 00:16:45.466 | 70.00th=[ 40], 80.00th=[ 44], 90.00th=[ 48], 95.00th=[ 61], 00:16:45.466 | 99.00th=[ 155], 99.50th=[ 180], 99.90th=[ 211], 99.95th=[ 247], 00:16:45.466 | 99.99th=[ 288] 00:16:45.466 write: IOPS=2873, BW=11.2MiB/s (11.8MB/s)(256MiB/22804msec); 0 zone resets 00:16:45.466 slat (usec): min=5, max=648, avg= 8.94, stdev= 6.13 00:16:45.466 clat (usec): min=449, max=108219, avg=10755.03, stdev=18713.81 00:16:45.466 lat (usec): min=484, max=108227, avg=10763.97, stdev=18714.03 00:16:45.466 clat percentiles (usec): 00:16:45.466 | 1.00th=[ 1037], 5.00th=[ 1336], 10.00th=[ 1565], 20.00th=[ 2376], 00:16:45.466 | 30.00th=[ 3687], 40.00th=[ 5211], 50.00th=[ 5997], 60.00th=[ 6915], 00:16:45.466 | 70.00th=[ 7701], 80.00th=[ 10945], 90.00th=[ 15008], 95.00th=[ 73925], 00:16:45.466 | 99.00th=[ 93848], 99.50th=[ 96994], 99.90th=[101188], 99.95th=[102237], 00:16:45.466 | 99.99th=[106431] 00:16:45.466 bw ( KiB/s): min= 2600, max=39232, 
per=100.00%, avg=20971.52, stdev=11843.51, samples=25 00:16:45.466 iops : min= 650, max= 9808, avg=5242.88, stdev=2960.88, samples=25 00:16:45.466 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.34% 00:16:45.466 lat (msec) : 2=8.18%, 4=7.93%, 10=23.16%, 20=7.53%, 50=46.15% 00:16:45.466 lat (msec) : 100=5.15%, 250=1.48%, 500=0.02% 00:16:45.466 cpu : usr=99.13%, sys=0.25%, ctx=69, majf=0, minf=5561 00:16:45.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:45.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.466 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:45.466 issued rwts: total=65310,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:45.466 second_half: (groupid=0, jobs=1): err= 0: pid=71588: Sat Jul 13 21:07:56 2024 00:16:45.466 read: IOPS=2359, BW=9438KiB/s (9665kB/s)(255MiB/27679msec) 00:16:45.466 slat (nsec): min=4369, max=96966, avg=7180.11, stdev=1998.65 00:16:45.466 clat (usec): min=870, max=309490, avg=42273.84, stdev=22870.42 00:16:45.466 lat (usec): min=877, max=309498, avg=42281.02, stdev=22870.65 00:16:45.466 clat percentiles (msec): 00:16:45.466 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 37], 00:16:45.466 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39], 00:16:45.466 | 70.00th=[ 40], 80.00th=[ 43], 90.00th=[ 46], 95.00th=[ 61], 00:16:45.466 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 230], 99.95th=[ 279], 00:16:45.466 | 99.99th=[ 305] 00:16:45.466 write: IOPS=2611, BW=10.2MiB/s (10.7MB/s)(256MiB/25097msec); 0 zone resets 00:16:45.466 slat (usec): min=5, max=249, avg= 9.13, stdev= 5.08 00:16:45.466 clat (usec): min=471, max=108655, avg=11905.98, stdev=20030.22 00:16:45.466 lat (usec): min=486, max=108665, avg=11915.10, stdev=20030.41 00:16:45.466 clat percentiles (usec): 00:16:45.466 | 1.00th=[ 947], 5.00th=[ 1237], 10.00th=[ 1401], 20.00th=[ 1680], 00:16:45.466 | 30.00th=[ 2278], 40.00th=[ 3982], 50.00th=[ 5473], 60.00th=[ 6718], 00:16:45.466 | 70.00th=[ 8225], 80.00th=[ 12780], 90.00th=[ 35390], 95.00th=[ 74974], 00:16:45.466 | 99.00th=[ 93848], 99.50th=[ 96994], 99.90th=[102237], 99.95th=[106431], 00:16:45.466 | 99.99th=[107480] 00:16:45.466 bw ( KiB/s): min= 704, max=50704, per=96.52%, avg=20164.92, stdev=14313.50, samples=26 00:16:45.466 iops : min= 176, max=12676, avg=5041.23, stdev=3578.37, samples=26 00:16:45.466 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.63% 00:16:45.466 lat (msec) : 2=13.02%, 4=6.52%, 10=17.41%, 20=8.62%, 50=47.64% 00:16:45.467 lat (msec) : 100=4.46%, 250=1.58%, 500=0.04% 00:16:45.467 cpu : usr=99.13%, sys=0.31%, ctx=47, majf=0, minf=5554 00:16:45.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:45.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.467 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:45.467 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:45.467 00:16:45.467 Run status group 0 (all jobs): 00:16:45.467 READ: bw=18.4MiB/s (19.3MB/s), 9438KiB/s-9512KiB/s (9665kB/s-9740kB/s), io=510MiB (535MB), run=27465-27679msec 00:16:45.467 WRITE: bw=20.4MiB/s (21.4MB/s), 10.2MiB/s-11.2MiB/s (10.7MB/s-11.8MB/s), io=512MiB (537MB), run=22804-25097msec 00:16:45.467 ----------------------------------------------------- 00:16:45.467 Suppressions used: 00:16:45.467 count bytes 
template 00:16:45.467 2 10 /usr/src/fio/parse.c 00:16:45.467 4 384 /usr/src/fio/iolog.c 00:16:45.467 1 8 libtcmalloc_minimal.so 00:16:45.467 1 904 libcrypto.so 00:16:45.467 ----------------------------------------------------- 00:16:45.467 00:16:45.467 21:07:58 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:45.467 21:07:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:45.467 21:07:58 -- common/autotest_common.sh@10 -- # set +x 00:16:45.467 21:07:58 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:45.467 21:07:58 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:45.467 21:07:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:45.467 21:07:58 -- common/autotest_common.sh@10 -- # set +x 00:16:45.467 21:07:58 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:45.467 21:07:58 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:45.467 21:07:58 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:45.467 21:07:58 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:45.467 21:07:58 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:45.467 21:07:58 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:45.467 21:07:58 -- common/autotest_common.sh@1320 -- # shift 00:16:45.467 21:07:58 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:45.467 21:07:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:45.467 21:07:58 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:45.467 21:07:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:45.467 21:07:58 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:45.467 21:07:58 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:45.467 21:07:58 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:45.467 21:07:58 -- common/autotest_common.sh@1326 -- # break 00:16:45.467 21:07:58 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:45.467 21:07:58 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:45.467 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:45.467 fio-3.35 00:16:45.467 Starting 1 thread 00:17:03.550 00:17:03.550 test: (groupid=0, jobs=1): err= 0: pid=71939: Sat Jul 13 21:08:15 2024 00:17:03.550 read: IOPS=6467, BW=25.3MiB/s (26.5MB/s)(255MiB/10082msec) 00:17:03.550 slat (nsec): min=4399, max=38420, avg=6407.80, stdev=2069.34 00:17:03.550 clat (usec): min=782, max=39009, avg=19781.05, stdev=1169.29 00:17:03.550 lat (usec): min=787, max=39016, avg=19787.45, stdev=1169.30 00:17:03.550 clat percentiles (usec): 00:17:03.550 | 1.00th=[18482], 5.00th=[18744], 10.00th=[19006], 20.00th=[19006], 00:17:03.550 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:17:03.550 | 70.00th=[20055], 80.00th=[20317], 90.00th=[20579], 95.00th=[21627], 00:17:03.550 | 99.00th=[23200], 99.50th=[25297], 99.90th=[29230], 99.95th=[34341], 00:17:03.550 | 99.99th=[38011] 00:17:03.550 write: IOPS=12.0k, BW=47.0MiB/s 
(49.2MB/s)(256MiB/5451msec); 0 zone resets 00:17:03.550 slat (usec): min=5, max=483, avg= 8.84, stdev= 5.22 00:17:03.550 clat (usec): min=629, max=60484, avg=10583.44, stdev=13434.84 00:17:03.550 lat (usec): min=636, max=60493, avg=10592.28, stdev=13434.87 00:17:03.550 clat percentiles (usec): 00:17:03.550 | 1.00th=[ 963], 5.00th=[ 1156], 10.00th=[ 1287], 20.00th=[ 1467], 00:17:03.550 | 30.00th=[ 1663], 40.00th=[ 2147], 50.00th=[ 6915], 60.00th=[ 7832], 00:17:03.550 | 70.00th=[ 8979], 80.00th=[10683], 90.00th=[39060], 95.00th=[41681], 00:17:03.550 | 99.00th=[45876], 99.50th=[47973], 99.90th=[50594], 99.95th=[51643], 00:17:03.550 | 99.99th=[56361] 00:17:03.550 bw ( KiB/s): min=38528, max=67648, per=99.11%, avg=47662.55, stdev=10345.36, samples=11 00:17:03.550 iops : min= 9632, max=16912, avg=11915.64, stdev=2586.34, samples=11 00:17:03.550 lat (usec) : 750=0.03%, 1000=0.74% 00:17:03.550 lat (msec) : 2=18.75%, 4=1.38%, 10=17.95%, 20=38.06%, 50=23.01% 00:17:03.550 lat (msec) : 100=0.10% 00:17:03.550 cpu : usr=98.71%, sys=0.67%, ctx=31, majf=0, minf=5567 00:17:03.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:03.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.550 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:03.550 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:03.550 00:17:03.550 Run status group 0 (all jobs): 00:17:03.550 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=255MiB (267MB), run=10082-10082msec 00:17:03.550 WRITE: bw=47.0MiB/s (49.2MB/s), 47.0MiB/s-47.0MiB/s (49.2MB/s-49.2MB/s), io=256MiB (268MB), run=5451-5451msec 00:17:03.550 ----------------------------------------------------- 00:17:03.550 Suppressions used: 00:17:03.550 count bytes template 00:17:03.550 1 5 /usr/src/fio/parse.c 00:17:03.550 2 192 /usr/src/fio/iolog.c 00:17:03.550 1 8 libtcmalloc_minimal.so 00:17:03.550 1 904 libcrypto.so 00:17:03.550 ----------------------------------------------------- 00:17:03.550 00:17:03.550 21:08:16 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:03.550 21:08:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:03.550 21:08:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.550 21:08:16 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:03.550 21:08:16 -- ftl/fio.sh@85 -- # remove_shm 00:17:03.550 Remove shared memory files 00:17:03.550 21:08:16 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:03.550 21:08:16 -- ftl/common.sh@205 -- # rm -f rm -f 00:17:03.550 21:08:16 -- ftl/common.sh@206 -- # rm -f rm -f 00:17:03.550 21:08:16 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56396 /dev/shm/spdk_tgt_trace.pid70188 00:17:03.550 21:08:16 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:03.550 21:08:16 -- ftl/common.sh@209 -- # rm -f rm -f 00:17:03.550 00:17:03.550 real 1m9.875s 00:17:03.550 user 2m35.186s 00:17:03.550 sys 0m3.724s 00:17:03.550 21:08:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:03.550 21:08:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.550 ************************************ 00:17:03.550 END TEST ftl_fio_basic 00:17:03.550 ************************************ 00:17:03.550 21:08:16 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:17:03.550 
21:08:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:03.550 21:08:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:03.550 21:08:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.550 ************************************ 00:17:03.551 START TEST ftl_bdevperf 00:17:03.551 ************************************ 00:17:03.551 21:08:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:17:03.551 * Looking for test storage... 00:17:03.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:03.551 21:08:16 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:03.551 21:08:16 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:03.551 21:08:16 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:03.551 21:08:16 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:03.551 21:08:16 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:03.551 21:08:16 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.551 21:08:16 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:03.551 21:08:16 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:03.551 21:08:16 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.551 21:08:16 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.551 21:08:16 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:03.551 21:08:16 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:03.551 21:08:16 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:03.551 21:08:16 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:03.551 21:08:16 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:03.551 21:08:16 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:03.551 21:08:16 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.551 21:08:16 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.551 21:08:16 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:03.551 21:08:16 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:03.551 21:08:16 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:03.551 21:08:16 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:03.551 21:08:16 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:03.551 21:08:16 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:03.551 21:08:16 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:03.551 21:08:16 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:03.551 21:08:16 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:03.551 21:08:16 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@13 -- # use_append= 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:03.551 21:08:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:03.551 21:08:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=72188 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@22 -- # waitforlisten 72188 00:17:03.551 21:08:16 -- common/autotest_common.sh@819 -- # '[' -z 72188 ']' 00:17:03.551 21:08:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.551 21:08:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:03.551 21:08:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.551 21:08:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:03.551 21:08:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.551 21:08:16 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:03.551 [2024-07-13 21:08:17.043003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:03.551 [2024-07-13 21:08:17.043189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72188 ] 00:17:03.551 [2024-07-13 21:08:17.215682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.551 [2024-07-13 21:08:17.406966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.119 21:08:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.120 21:08:17 -- common/autotest_common.sh@852 -- # return 0 00:17:04.120 21:08:17 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:04.120 21:08:17 -- ftl/common.sh@54 -- # local name=nvme0 00:17:04.120 21:08:17 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:04.120 21:08:17 -- ftl/common.sh@56 -- # local size=103424 00:17:04.120 21:08:17 -- ftl/common.sh@59 -- # local base_bdev 00:17:04.120 21:08:17 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:04.753 21:08:18 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:04.753 21:08:18 -- ftl/common.sh@62 -- # local base_size 00:17:04.753 21:08:18 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:04.753 21:08:18 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:17:04.753 21:08:18 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:04.753 21:08:18 -- common/autotest_common.sh@1359 -- # local bs 00:17:04.753 21:08:18 -- common/autotest_common.sh@1360 -- # local nb 00:17:04.753 21:08:18 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:04.753 21:08:18 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:04.753 { 00:17:04.753 "name": "nvme0n1", 00:17:04.753 "aliases": [ 00:17:04.753 "3cba0b14-4543-4e03-811c-c47252b4124d" 00:17:04.753 ], 00:17:04.753 "product_name": "NVMe disk", 
00:17:04.753 "block_size": 4096, 00:17:04.753 "num_blocks": 1310720, 00:17:04.753 "uuid": "3cba0b14-4543-4e03-811c-c47252b4124d", 00:17:04.753 "assigned_rate_limits": { 00:17:04.753 "rw_ios_per_sec": 0, 00:17:04.753 "rw_mbytes_per_sec": 0, 00:17:04.753 "r_mbytes_per_sec": 0, 00:17:04.753 "w_mbytes_per_sec": 0 00:17:04.753 }, 00:17:04.753 "claimed": true, 00:17:04.753 "claim_type": "read_many_write_one", 00:17:04.753 "zoned": false, 00:17:04.753 "supported_io_types": { 00:17:04.753 "read": true, 00:17:04.753 "write": true, 00:17:04.753 "unmap": true, 00:17:04.753 "write_zeroes": true, 00:17:04.753 "flush": true, 00:17:04.753 "reset": true, 00:17:04.753 "compare": true, 00:17:04.753 "compare_and_write": false, 00:17:04.753 "abort": true, 00:17:04.753 "nvme_admin": true, 00:17:04.753 "nvme_io": true 00:17:04.753 }, 00:17:04.753 "driver_specific": { 00:17:04.753 "nvme": [ 00:17:04.753 { 00:17:04.753 "pci_address": "0000:00:07.0", 00:17:04.753 "trid": { 00:17:04.753 "trtype": "PCIe", 00:17:04.753 "traddr": "0000:00:07.0" 00:17:04.753 }, 00:17:04.753 "ctrlr_data": { 00:17:04.754 "cntlid": 0, 00:17:04.754 "vendor_id": "0x1b36", 00:17:04.754 "model_number": "QEMU NVMe Ctrl", 00:17:04.754 "serial_number": "12341", 00:17:04.754 "firmware_revision": "8.0.0", 00:17:04.754 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:04.754 "oacs": { 00:17:04.754 "security": 0, 00:17:04.754 "format": 1, 00:17:04.754 "firmware": 0, 00:17:04.754 "ns_manage": 1 00:17:04.754 }, 00:17:04.754 "multi_ctrlr": false, 00:17:04.754 "ana_reporting": false 00:17:04.754 }, 00:17:04.754 "vs": { 00:17:04.754 "nvme_version": "1.4" 00:17:04.754 }, 00:17:04.754 "ns_data": { 00:17:04.754 "id": 1, 00:17:04.754 "can_share": false 00:17:04.754 } 00:17:04.754 } 00:17:04.754 ], 00:17:04.754 "mp_policy": "active_passive" 00:17:04.754 } 00:17:04.754 } 00:17:04.754 ]' 00:17:04.754 21:08:18 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:04.754 21:08:18 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:04.754 21:08:18 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:04.754 21:08:18 -- common/autotest_common.sh@1363 -- # nb=1310720 00:17:04.754 21:08:18 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:17:04.754 21:08:18 -- common/autotest_common.sh@1367 -- # echo 5120 00:17:04.754 21:08:18 -- ftl/common.sh@63 -- # base_size=5120 00:17:04.754 21:08:18 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:04.754 21:08:18 -- ftl/common.sh@67 -- # clear_lvols 00:17:04.754 21:08:18 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:04.754 21:08:18 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:05.026 21:08:18 -- ftl/common.sh@28 -- # stores=eea5342c-2435-4c33-b71e-fc2292ccb52c 00:17:05.026 21:08:18 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:05.026 21:08:18 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eea5342c-2435-4c33-b71e-fc2292ccb52c 00:17:05.284 21:08:19 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:05.543 21:08:19 -- ftl/common.sh@68 -- # lvs=9352dad6-5940-476b-8b68-cdfa4f9297ea 00:17:05.543 21:08:19 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9352dad6-5940-476b-8b68-cdfa4f9297ea 00:17:05.801 21:08:19 -- ftl/bdevperf.sh@23 -- # split_bdev=39e147a9-16bf-4725-a445-e908116bf00b 00:17:05.801 21:08:19 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 
39e147a9-16bf-4725-a445-e908116bf00b 00:17:05.801 21:08:19 -- ftl/common.sh@35 -- # local name=nvc0 00:17:05.801 21:08:19 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:05.801 21:08:19 -- ftl/common.sh@37 -- # local base_bdev=39e147a9-16bf-4725-a445-e908116bf00b 00:17:05.801 21:08:19 -- ftl/common.sh@38 -- # local cache_size= 00:17:05.801 21:08:19 -- ftl/common.sh@41 -- # get_bdev_size 39e147a9-16bf-4725-a445-e908116bf00b 00:17:05.801 21:08:19 -- common/autotest_common.sh@1357 -- # local bdev_name=39e147a9-16bf-4725-a445-e908116bf00b 00:17:05.801 21:08:19 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:05.801 21:08:19 -- common/autotest_common.sh@1359 -- # local bs 00:17:05.801 21:08:19 -- common/autotest_common.sh@1360 -- # local nb 00:17:05.801 21:08:19 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 39e147a9-16bf-4725-a445-e908116bf00b 00:17:06.059 21:08:19 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:06.059 { 00:17:06.059 "name": "39e147a9-16bf-4725-a445-e908116bf00b", 00:17:06.059 "aliases": [ 00:17:06.059 "lvs/nvme0n1p0" 00:17:06.059 ], 00:17:06.059 "product_name": "Logical Volume", 00:17:06.059 "block_size": 4096, 00:17:06.059 "num_blocks": 26476544, 00:17:06.059 "uuid": "39e147a9-16bf-4725-a445-e908116bf00b", 00:17:06.059 "assigned_rate_limits": { 00:17:06.059 "rw_ios_per_sec": 0, 00:17:06.059 "rw_mbytes_per_sec": 0, 00:17:06.059 "r_mbytes_per_sec": 0, 00:17:06.059 "w_mbytes_per_sec": 0 00:17:06.059 }, 00:17:06.059 "claimed": false, 00:17:06.059 "zoned": false, 00:17:06.059 "supported_io_types": { 00:17:06.059 "read": true, 00:17:06.059 "write": true, 00:17:06.059 "unmap": true, 00:17:06.059 "write_zeroes": true, 00:17:06.059 "flush": false, 00:17:06.059 "reset": true, 00:17:06.059 "compare": false, 00:17:06.059 "compare_and_write": false, 00:17:06.059 "abort": false, 00:17:06.059 "nvme_admin": false, 00:17:06.059 "nvme_io": false 00:17:06.059 }, 00:17:06.059 "driver_specific": { 00:17:06.059 "lvol": { 00:17:06.059 "lvol_store_uuid": "9352dad6-5940-476b-8b68-cdfa4f9297ea", 00:17:06.059 "base_bdev": "nvme0n1", 00:17:06.059 "thin_provision": true, 00:17:06.059 "snapshot": false, 00:17:06.059 "clone": false, 00:17:06.059 "esnap_clone": false 00:17:06.059 } 00:17:06.059 } 00:17:06.059 } 00:17:06.059 ]' 00:17:06.059 21:08:19 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:06.059 21:08:19 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:06.059 21:08:19 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:06.059 21:08:19 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:06.059 21:08:19 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:06.059 21:08:19 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:06.059 21:08:19 -- ftl/common.sh@41 -- # local base_size=5171 00:17:06.059 21:08:19 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:06.059 21:08:19 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:06.317 21:08:20 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:06.317 21:08:20 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:06.317 21:08:20 -- ftl/common.sh@48 -- # get_bdev_size 39e147a9-16bf-4725-a445-e908116bf00b 00:17:06.317 21:08:20 -- common/autotest_common.sh@1357 -- # local bdev_name=39e147a9-16bf-4725-a445-e908116bf00b 00:17:06.317 21:08:20 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:06.317 21:08:20 -- common/autotest_common.sh@1359 -- # local 
bs 00:17:06.317 21:08:20 -- common/autotest_common.sh@1360 -- # local nb 00:17:06.317 21:08:20 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 39e147a9-16bf-4725-a445-e908116bf00b 00:17:06.576 21:08:20 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:06.576 { 00:17:06.576 "name": "39e147a9-16bf-4725-a445-e908116bf00b", 00:17:06.576 "aliases": [ 00:17:06.576 "lvs/nvme0n1p0" 00:17:06.576 ], 00:17:06.576 "product_name": "Logical Volume", 00:17:06.576 "block_size": 4096, 00:17:06.576 "num_blocks": 26476544, 00:17:06.576 "uuid": "39e147a9-16bf-4725-a445-e908116bf00b", 00:17:06.576 "assigned_rate_limits": { 00:17:06.576 "rw_ios_per_sec": 0, 00:17:06.576 "rw_mbytes_per_sec": 0, 00:17:06.576 "r_mbytes_per_sec": 0, 00:17:06.576 "w_mbytes_per_sec": 0 00:17:06.576 }, 00:17:06.576 "claimed": false, 00:17:06.576 "zoned": false, 00:17:06.576 "supported_io_types": { 00:17:06.576 "read": true, 00:17:06.576 "write": true, 00:17:06.576 "unmap": true, 00:17:06.576 "write_zeroes": true, 00:17:06.576 "flush": false, 00:17:06.576 "reset": true, 00:17:06.576 "compare": false, 00:17:06.576 "compare_and_write": false, 00:17:06.576 "abort": false, 00:17:06.576 "nvme_admin": false, 00:17:06.576 "nvme_io": false 00:17:06.576 }, 00:17:06.576 "driver_specific": { 00:17:06.576 "lvol": { 00:17:06.576 "lvol_store_uuid": "9352dad6-5940-476b-8b68-cdfa4f9297ea", 00:17:06.576 "base_bdev": "nvme0n1", 00:17:06.576 "thin_provision": true, 00:17:06.576 "snapshot": false, 00:17:06.576 "clone": false, 00:17:06.576 "esnap_clone": false 00:17:06.576 } 00:17:06.576 } 00:17:06.577 } 00:17:06.577 ]' 00:17:06.577 21:08:20 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:06.577 21:08:20 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:06.577 21:08:20 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:06.836 21:08:20 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:06.836 21:08:20 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:06.836 21:08:20 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:06.836 21:08:20 -- ftl/common.sh@48 -- # cache_size=5171 00:17:06.836 21:08:20 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:06.836 21:08:20 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:17:06.836 21:08:20 -- ftl/bdevperf.sh@26 -- # get_bdev_size 39e147a9-16bf-4725-a445-e908116bf00b 00:17:06.836 21:08:20 -- common/autotest_common.sh@1357 -- # local bdev_name=39e147a9-16bf-4725-a445-e908116bf00b 00:17:06.836 21:08:20 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:06.836 21:08:20 -- common/autotest_common.sh@1359 -- # local bs 00:17:06.836 21:08:20 -- common/autotest_common.sh@1360 -- # local nb 00:17:06.836 21:08:20 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 39e147a9-16bf-4725-a445-e908116bf00b 00:17:07.095 21:08:20 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:07.095 { 00:17:07.095 "name": "39e147a9-16bf-4725-a445-e908116bf00b", 00:17:07.095 "aliases": [ 00:17:07.095 "lvs/nvme0n1p0" 00:17:07.095 ], 00:17:07.095 "product_name": "Logical Volume", 00:17:07.095 "block_size": 4096, 00:17:07.095 "num_blocks": 26476544, 00:17:07.095 "uuid": "39e147a9-16bf-4725-a445-e908116bf00b", 00:17:07.095 "assigned_rate_limits": { 00:17:07.095 "rw_ios_per_sec": 0, 00:17:07.095 "rw_mbytes_per_sec": 0, 00:17:07.095 "r_mbytes_per_sec": 0, 00:17:07.095 "w_mbytes_per_sec": 0 00:17:07.095 }, 00:17:07.095 
"claimed": false, 00:17:07.095 "zoned": false, 00:17:07.095 "supported_io_types": { 00:17:07.095 "read": true, 00:17:07.095 "write": true, 00:17:07.095 "unmap": true, 00:17:07.095 "write_zeroes": true, 00:17:07.095 "flush": false, 00:17:07.095 "reset": true, 00:17:07.095 "compare": false, 00:17:07.095 "compare_and_write": false, 00:17:07.095 "abort": false, 00:17:07.095 "nvme_admin": false, 00:17:07.095 "nvme_io": false 00:17:07.095 }, 00:17:07.095 "driver_specific": { 00:17:07.095 "lvol": { 00:17:07.095 "lvol_store_uuid": "9352dad6-5940-476b-8b68-cdfa4f9297ea", 00:17:07.095 "base_bdev": "nvme0n1", 00:17:07.095 "thin_provision": true, 00:17:07.095 "snapshot": false, 00:17:07.095 "clone": false, 00:17:07.095 "esnap_clone": false 00:17:07.095 } 00:17:07.095 } 00:17:07.095 } 00:17:07.095 ]' 00:17:07.095 21:08:20 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:07.095 21:08:20 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:07.095 21:08:20 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:07.354 21:08:21 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:07.354 21:08:21 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:07.354 21:08:21 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:07.354 21:08:21 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:17:07.354 21:08:21 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 39e147a9-16bf-4725-a445-e908116bf00b -c nvc0n1p0 --l2p_dram_limit 20 00:17:07.613 [2024-07-13 21:08:21.285780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.285874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:07.613 [2024-07-13 21:08:21.285933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:07.613 [2024-07-13 21:08:21.285946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.286018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.286034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.613 [2024-07-13 21:08:21.286048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:07.613 [2024-07-13 21:08:21.286059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.286087] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:07.613 [2024-07-13 21:08:21.287120] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:07.613 [2024-07-13 21:08:21.287201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.287217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.613 [2024-07-13 21:08:21.287231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:17:07.613 [2024-07-13 21:08:21.287258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.287383] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1aaa715b-33b9-4fbc-acd5-747d60237db7 00:17:07.613 [2024-07-13 21:08:21.288605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.288644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Default-initialize superblock 00:17:07.613 [2024-07-13 21:08:21.288676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:07.613 [2024-07-13 21:08:21.288689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.293128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.293170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:07.613 [2024-07-13 21:08:21.293203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.396 ms 00:17:07.613 [2024-07-13 21:08:21.293220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.293340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.293359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:07.613 [2024-07-13 21:08:21.293371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:07.613 [2024-07-13 21:08:21.293387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.293455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.293475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:07.613 [2024-07-13 21:08:21.293486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:07.613 [2024-07-13 21:08:21.293498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.293529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:07.613 [2024-07-13 21:08:21.297647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.297681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:07.613 [2024-07-13 21:08:21.297720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.126 ms 00:17:07.613 [2024-07-13 21:08:21.297731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.297770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.297783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:07.613 [2024-07-13 21:08:21.297796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:07.613 [2024-07-13 21:08:21.297806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.297889] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:07.613 [2024-07-13 21:08:21.298039] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:07.613 [2024-07-13 21:08:21.298064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:07.613 [2024-07-13 21:08:21.298078] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:07.613 [2024-07-13 21:08:21.298094] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:07.613 [2024-07-13 21:08:21.298107] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:07.613 [2024-07-13 
21:08:21.298121] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:07.613 [2024-07-13 21:08:21.298131] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:07.613 [2024-07-13 21:08:21.298145] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:07.613 [2024-07-13 21:08:21.298155] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:07.613 [2024-07-13 21:08:21.298171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.298182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:07.613 [2024-07-13 21:08:21.298195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:17:07.613 [2024-07-13 21:08:21.298205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.298338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.613 [2024-07-13 21:08:21.298352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:07.613 [2024-07-13 21:08:21.298366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:07.613 [2024-07-13 21:08:21.298377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.613 [2024-07-13 21:08:21.298456] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:07.614 [2024-07-13 21:08:21.298473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:07.614 [2024-07-13 21:08:21.298487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:07.614 [2024-07-13 21:08:21.298499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.614 [2024-07-13 21:08:21.298512] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:07.614 [2024-07-13 21:08:21.298523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:07.614 [2024-07-13 21:08:21.298536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:07.614 [2024-07-13 21:08:21.298547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:07.614 [2024-07-13 21:08:21.298570] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:07.614 [2024-07-13 21:08:21.298581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:07.614 [2024-07-13 21:08:21.298595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:07.614 [2024-07-13 21:08:21.298606] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:07.614 [2024-07-13 21:08:21.298618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:07.614 [2024-07-13 21:08:21.298628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:07.614 [2024-07-13 21:08:21.298641] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:07.614 [2024-07-13 21:08:21.298651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.614 [2024-07-13 21:08:21.298665] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:07.614 [2024-07-13 21:08:21.298676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:07.614 [2024-07-13 21:08:21.298688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.614 [2024-07-13 21:08:21.298699] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:07.614 [2024-07-13 21:08:21.298711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:07.614 [2024-07-13 21:08:21.298722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:07.614 [2024-07-13 21:08:21.298734] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:07.614 [2024-07-13 21:08:21.298745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:07.614 [2024-07-13 21:08:21.298759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:07.614 [2024-07-13 21:08:21.298770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:07.614 [2024-07-13 21:08:21.298782] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:07.614 [2024-07-13 21:08:21.298792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:07.614 [2024-07-13 21:08:21.298804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:07.614 [2024-07-13 21:08:21.298815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:07.614 [2024-07-13 21:08:21.298826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:07.614 [2024-07-13 21:08:21.298837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:07.614 [2024-07-13 21:08:21.298867] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:07.614 [2024-07-13 21:08:21.298878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:07.614 [2024-07-13 21:08:21.298891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:07.614 [2024-07-13 21:08:21.298901] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:07.614 [2024-07-13 21:08:21.298916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:07.614 [2024-07-13 21:08:21.298927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:07.614 [2024-07-13 21:08:21.298939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:07.614 [2024-07-13 21:08:21.298965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:07.614 [2024-07-13 21:08:21.298979] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:07.614 [2024-07-13 21:08:21.298991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:07.614 [2024-07-13 21:08:21.299003] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:07.614 [2024-07-13 21:08:21.299015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.614 [2024-07-13 21:08:21.299029] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:07.614 [2024-07-13 21:08:21.299040] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:07.614 [2024-07-13 21:08:21.299053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:07.614 [2024-07-13 21:08:21.299064] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:07.614 [2024-07-13 21:08:21.299078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:07.614 [2024-07-13 21:08:21.299089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:07.614 [2024-07-13 21:08:21.299103] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata 
layout - nvc: 00:17:07.614 [2024-07-13 21:08:21.299117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:07.614 [2024-07-13 21:08:21.299132] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:07.614 [2024-07-13 21:08:21.299144] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:07.614 [2024-07-13 21:08:21.299158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:07.614 [2024-07-13 21:08:21.299169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:07.614 [2024-07-13 21:08:21.299184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:07.614 [2024-07-13 21:08:21.299195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:07.614 [2024-07-13 21:08:21.299208] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:07.614 [2024-07-13 21:08:21.299220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:07.614 [2024-07-13 21:08:21.299249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:07.614 [2024-07-13 21:08:21.299261] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:07.614 [2024-07-13 21:08:21.299274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:07.614 [2024-07-13 21:08:21.299285] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:07.614 [2024-07-13 21:08:21.299302] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:07.614 [2024-07-13 21:08:21.299313] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:07.614 [2024-07-13 21:08:21.299328] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:07.614 [2024-07-13 21:08:21.299343] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:07.614 [2024-07-13 21:08:21.299356] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:07.614 [2024-07-13 21:08:21.299368] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:07.614 [2024-07-13 21:08:21.299382] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:07.614 [2024-07-13 21:08:21.299394] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.614 [2024-07-13 21:08:21.299407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:07.614 [2024-07-13 21:08:21.299419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:17:07.614 [2024-07-13 21:08:21.299433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.614 [2024-07-13 21:08:21.317067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.614 [2024-07-13 21:08:21.317130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:07.614 [2024-07-13 21:08:21.317147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.588 ms 00:17:07.614 [2024-07-13 21:08:21.317160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.614 [2024-07-13 21:08:21.317252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.614 [2024-07-13 21:08:21.317270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:07.614 [2024-07-13 21:08:21.317281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:07.614 [2024-07-13 21:08:21.317293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.614 [2024-07-13 21:08:21.364287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.614 [2024-07-13 21:08:21.364342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:07.614 [2024-07-13 21:08:21.364393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.935 ms 00:17:07.614 [2024-07-13 21:08:21.364422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.614 [2024-07-13 21:08:21.364482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.614 [2024-07-13 21:08:21.364499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:07.614 [2024-07-13 21:08:21.364511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:07.614 [2024-07-13 21:08:21.364526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.614 [2024-07-13 21:08:21.364897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.614 [2024-07-13 21:08:21.364940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:07.614 [2024-07-13 21:08:21.364955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:17:07.614 [2024-07-13 21:08:21.364967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.614 [2024-07-13 21:08:21.365092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.614 [2024-07-13 21:08:21.365112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:07.614 [2024-07-13 21:08:21.365124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:07.614 [2024-07-13 21:08:21.365136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.614 [2024-07-13 21:08:21.380835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.614 [2024-07-13 21:08:21.381112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:07.614 [2024-07-13 21:08:21.381146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.676 ms 00:17:07.614 [2024-07-13 21:08:21.381162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
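The startup trace above is the result of the single bdev_ftl_create call; everything it runs on was assembled by the RPC sequence recorded in the xtrace output earlier. Boiled down, the assembly looks like this (a minimal sketch in the same shell, reusing the exact names, sizes, and PCI addresses from this run; <lvs-uuid> and <lvol-uuid> are placeholders for the UUIDs printed by the two preceding calls, and a running spdk_tgt is assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: the 5 GiB QEMU NVMe controller at 00:07.0, exposed as nvme0n1
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    # Logical volume store plus one thin-provisioned 103424 MiB volume on top of it
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                    # prints <lvs-uuid>
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>      # prints <lvol-uuid>
    # Cache device: second controller at 00:06.0, split to yield the 5171 MiB nvc0n1p0
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev: lvol as base, nvc0n1p0 as NV cache, L2P capped at 20 MiB of DRAM
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

The --l2p_dram_limit value is what the ftl_l2p_cache notice just below refers to.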
00:17:07.614 [2024-07-13 21:08:21.393450] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:07.614 [2024-07-13 21:08:21.398303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.614 [2024-07-13 21:08:21.398485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:07.614 [2024-07-13 21:08:21.398645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.006 ms 00:17:07.614 [2024-07-13 21:08:21.398697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.614 [2024-07-13 21:08:21.465031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.615 [2024-07-13 21:08:21.465318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:07.615 [2024-07-13 21:08:21.465464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.133 ms 00:17:07.615 [2024-07-13 21:08:21.465515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.615 [2024-07-13 21:08:21.465601] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:07.615 [2024-07-13 21:08:21.465743] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:10.144 [2024-07-13 21:08:23.923722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.144 [2024-07-13 21:08:23.924074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:10.144 [2024-07-13 21:08:23.924237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2458.130 ms 00:17:10.144 [2024-07-13 21:08:23.924372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.144 [2024-07-13 21:08:23.924699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.144 [2024-07-13 21:08:23.924764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:10.144 [2024-07-13 21:08:23.924966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:10.144 [2024-07-13 21:08:23.924991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.144 [2024-07-13 21:08:23.953585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.144 [2024-07-13 21:08:23.953626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:10.144 [2024-07-13 21:08:23.953662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.519 ms 00:17:10.144 [2024-07-13 21:08:23.953674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.144 [2024-07-13 21:08:23.983100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.144 [2024-07-13 21:08:23.983139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:10.144 [2024-07-13 21:08:23.983178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.377 ms 00:17:10.144 [2024-07-13 21:08:23.983189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.144 [2024-07-13 21:08:23.983535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.144 [2024-07-13 21:08:23.983569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:10.144 [2024-07-13 21:08:23.983617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:10.144 
[2024-07-13 21:08:23.983632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.144 [2024-07-13 21:08:24.057156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.144 [2024-07-13 21:08:24.057212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:10.144 [2024-07-13 21:08:24.057249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.460 ms 00:17:10.144 [2024-07-13 21:08:24.057261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.403 [2024-07-13 21:08:24.087378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.403 [2024-07-13 21:08:24.087438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:10.403 [2024-07-13 21:08:24.087477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.081 ms 00:17:10.403 [2024-07-13 21:08:24.087489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.403 [2024-07-13 21:08:24.089510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.403 [2024-07-13 21:08:24.089546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:10.403 [2024-07-13 21:08:24.089581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:17:10.403 [2024-07-13 21:08:24.089592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.403 [2024-07-13 21:08:24.118872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.403 [2024-07-13 21:08:24.119002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:10.403 [2024-07-13 21:08:24.119025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.215 ms 00:17:10.403 [2024-07-13 21:08:24.119037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.403 [2024-07-13 21:08:24.119089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.403 [2024-07-13 21:08:24.119104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:10.403 [2024-07-13 21:08:24.119119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:10.403 [2024-07-13 21:08:24.119130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.403 [2024-07-13 21:08:24.119269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.403 [2024-07-13 21:08:24.119289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:10.403 [2024-07-13 21:08:24.119304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:10.403 [2024-07-13 21:08:24.119315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.403 [2024-07-13 21:08:24.120469] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2834.190 ms, result 0 00:17:10.403 { 00:17:10.403 "name": "ftl0", 00:17:10.403 "uuid": "1aaa715b-33b9-4fbc-acd5-747d60237db7" 00:17:10.403 } 00:17:10.403 21:08:24 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:10.403 21:08:24 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:17:10.403 21:08:24 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:17:10.662 21:08:24 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:10.662 [2024-07-13 21:08:24.528879] 
mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:10.662 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:10.662 Zero copy mechanism will not be used. 00:17:10.662 Running I/O for 4 seconds... 00:17:14.852 00:17:14.852 Latency(us) 00:17:14.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.853 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:14.853 ftl0 : 4.00 1703.04 113.09 0.00 0.00 615.94 258.79 14417.92 00:17:14.853 =================================================================================================================== 00:17:14.853 Total : 1703.04 113.09 0.00 0.00 615.94 258.79 14417.92 00:17:14.853 0 00:17:14.853 [2024-07-13 21:08:28.538421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:14.853 21:08:28 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:14.853 [2024-07-13 21:08:28.668712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:14.853 Running I/O for 4 seconds... 00:17:19.040 00:17:19.040 Latency(us) 00:17:19.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.040 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.040 ftl0 : 4.02 7837.83 30.62 0.00 0.00 16284.27 314.65 32410.53 00:17:19.040 =================================================================================================================== 00:17:19.040 Total : 7837.83 30.62 0.00 0.00 16284.27 0.00 32410.53 00:17:19.040 0 00:17:19.040 [2024-07-13 21:08:32.700358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:19.040 21:08:32 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:19.040 [2024-07-13 21:08:32.837926] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:19.040 Running I/O for 4 seconds... 
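Both completed runs above, and the verify pass whose results follow below, are driven against one long-lived bdevperf process: it was launched with -z (stay idle until told to start over RPC) and -T ftl0 (restrict the test to that bdev), and each workload is then kicked off through the bdevperf.py helper. A sketch of the driver side, using the same binaries and flags as this job (the backgrounding with & is an assumption; the harness manages the process lifetime itself):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    $bdevperf -z -T ftl0 &     # waits for the perform_tests RPC before issuing any I/O
    py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $py perform_tests -q 1 -w randwrite -t 4 -o 69632    # 68 KiB writes at QD1; 69632 > 65536, hence the zero-copy notice above
    $py perform_tests -q 128 -w randwrite -t 4 -o 4096   # 4 KiB random writes at QD128
    $py perform_tests -q 128 -w verify -t 4 -o 4096      # 4 KiB verify pass; its latency table follows below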
00:17:23.229 00:17:23.229 Latency(us) 00:17:23.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.229 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:23.229 Verification LBA range: start 0x0 length 0x1400000 00:17:23.229 ftl0 : 4.01 8399.42 32.81 0.00 0.00 15197.30 242.04 20018.27 00:17:23.229 =================================================================================================================== 00:17:23.229 Total : 8399.42 32.81 0.00 0.00 15197.30 0.00 20018.27 00:17:23.229 [2024-07-13 21:08:36.864478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:23.229 0 00:17:23.229 21:08:36 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:23.229 [2024-07-13 21:08:37.129522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.229 [2024-07-13 21:08:37.129595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.229 [2024-07-13 21:08:37.129650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:23.229 [2024-07-13 21:08:37.129662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.229 [2024-07-13 21:08:37.129696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.229 [2024-07-13 21:08:37.133015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.229 [2024-07-13 21:08:37.133048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.229 [2024-07-13 21:08:37.133079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.297 ms 00:17:23.229 [2024-07-13 21:08:37.133097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.229 [2024-07-13 21:08:37.134762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.229 [2024-07-13 21:08:37.134822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.229 [2024-07-13 21:08:37.134902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms 00:17:23.229 [2024-07-13 21:08:37.134920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.488 [2024-07-13 21:08:37.311891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.488 [2024-07-13 21:08:37.311975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.488 [2024-07-13 21:08:37.311997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 176.936 ms 00:17:23.488 [2024-07-13 21:08:37.312011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.488 [2024-07-13 21:08:37.318048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.488 [2024-07-13 21:08:37.318084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:23.488 [2024-07-13 21:08:37.318114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.994 ms 00:17:23.488 [2024-07-13 21:08:37.318127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.488 [2024-07-13 21:08:37.345393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.488 [2024-07-13 21:08:37.345451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.488 [2024-07-13 21:08:37.345468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
27.206 ms 00:17:23.488 [2024-07-13 21:08:37.345483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.488 [2024-07-13 21:08:37.362256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.488 [2024-07-13 21:08:37.362298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.488 [2024-07-13 21:08:37.362331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.733 ms 00:17:23.488 [2024-07-13 21:08:37.362344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.488 [2024-07-13 21:08:37.362492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.488 [2024-07-13 21:08:37.362515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.488 [2024-07-13 21:08:37.362530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:17:23.488 [2024-07-13 21:08:37.362542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.488 [2024-07-13 21:08:37.393870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.488 [2024-07-13 21:08:37.393938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:23.488 [2024-07-13 21:08:37.393972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.308 ms 00:17:23.488 [2024-07-13 21:08:37.393985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.748 [2024-07-13 21:08:37.423359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.748 [2024-07-13 21:08:37.423417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:23.748 [2024-07-13 21:08:37.423434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.328 ms 00:17:23.748 [2024-07-13 21:08:37.423448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.748 [2024-07-13 21:08:37.451630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.748 [2024-07-13 21:08:37.451675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.748 [2024-07-13 21:08:37.451708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.142 ms 00:17:23.748 [2024-07-13 21:08:37.451720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.748 [2024-07-13 21:08:37.479934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.748 [2024-07-13 21:08:37.479993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.748 [2024-07-13 21:08:37.480009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.115 ms 00:17:23.748 [2024-07-13 21:08:37.480023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.748 [2024-07-13 21:08:37.480072] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.748 [2024-07-13 21:08:37.480119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.748 [2024-07-13 21:08:37.480135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.748 [2024-07-13 21:08:37.480150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.748 [2024-07-13 21:08:37.480162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.748 [2024-07-13 
21:08:37.480176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:23.749 [2024-07-13 21:08:37.480526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.480877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.481331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.481485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.481553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.481611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.481755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.481815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.749 [2024-07-13 21:08:37.482591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.750 [2024-07-13 21:08:37.482614] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.750 [2024-07-13 21:08:37.482626] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1aaa715b-33b9-4fbc-acd5-747d60237db7 00:17:23.750 [2024-07-13 21:08:37.482642] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.750 [2024-07-13 21:08:37.482654] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.750 
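The ftl_dev_dump_bands output above lists every band as "valid / total wr_cnt state"; in this run each band sits at 0 / 261120 with wr_cnt 0 in state free, since no user data was ever written to the device. When the console output is saved one record per line (build.log below is a hypothetical file name), a quick tally is easier to read than the raw dump; a minimal sketch:

  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log \
    | awk '{ states[$NF]++ } END { for (s in states) print states[s], "band(s) in state", s }'

The statistics records just below are consistent with that picture: total writes is 960 while user writes is 0, so the write amplification factor is logged as "WAF: inf" (960 internal metadata writes divided by zero user writes).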
[2024-07-13 21:08:37.482666] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.750 [2024-07-13 21:08:37.482678] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.750 [2024-07-13 21:08:37.482690] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.750 [2024-07-13 21:08:37.482702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.750 [2024-07-13 21:08:37.482714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.750 [2024-07-13 21:08:37.482725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.750 [2024-07-13 21:08:37.482737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.750 [2024-07-13 21:08:37.482748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.750 [2024-07-13 21:08:37.482765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.750 [2024-07-13 21:08:37.482779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.687 ms 00:17:23.750 [2024-07-13 21:08:37.482792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.498177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.750 [2024-07-13 21:08:37.498233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.750 [2024-07-13 21:08:37.498250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.324 ms 00:17:23.750 [2024-07-13 21:08:37.498264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.498474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.750 [2024-07-13 21:08:37.498490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.750 [2024-07-13 21:08:37.498502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:17:23.750 [2024-07-13 21:08:37.498513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.546577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.546637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.750 [2024-07-13 21:08:37.546654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.546668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.546730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.546747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.750 [2024-07-13 21:08:37.546758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.546770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.546891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.546915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.750 [2024-07-13 21:08:37.546928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.546942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.546995] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.547013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.750 [2024-07-13 21:08:37.547024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.547037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.630889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.630968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.750 [2024-07-13 21:08:37.631000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.631013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.664469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.664528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.750 [2024-07-13 21:08:37.664544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.664556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.664639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.664658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.750 [2024-07-13 21:08:37.664670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.664684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.664734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.664751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.750 [2024-07-13 21:08:37.664765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.664777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.664936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.664960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.750 [2024-07-13 21:08:37.664973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.664985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.665032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.665052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.750 [2024-07-13 21:08:37.665064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.665079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.665121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.665137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.750 [2024-07-13 21:08:37.665148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.665163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
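Each trace_step block in mngt/ftl_mngt.c logs a header (Action or Rollback) followed by name, duration, and status records. The Rollback entries above are the shutdown path undoing the startup steps in reverse order, with most durations reported as 0.000 ms since the resources are simply released. A minimal sketch for turning a saved console log (build.log is a hypothetical name, assuming one record per line as Jenkins originally emits) into a name/duration table:

  awk '/trace_step/ && /name: /     { sub(/.*name: /, "");     n = $0 }
       /trace_step/ && /duration: / { sub(/.*duration: /, ""); print n " -> " $0 }' build.log

For the shutdown above this would print pairs such as "Dump statistics -> 2.687 ms" and "Deinitialize L2P -> 15.324 ms".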
00:17:23.750 [2024-07-13 21:08:37.665211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.750 [2024-07-13 21:08:37.665228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.750 [2024-07-13 21:08:37.665242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.750 [2024-07-13 21:08:37.665254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.750 [2024-07-13 21:08:37.665411] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.861 ms, result 0 00:17:24.009 true 00:17:24.009 21:08:37 -- ftl/bdevperf.sh@37 -- # killprocess 72188 00:17:24.009 21:08:37 -- common/autotest_common.sh@926 -- # '[' -z 72188 ']' 00:17:24.009 21:08:37 -- common/autotest_common.sh@930 -- # kill -0 72188 00:17:24.009 21:08:37 -- common/autotest_common.sh@931 -- # uname 00:17:24.009 21:08:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:24.009 21:08:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72188 00:17:24.009 killing process with pid 72188 00:17:24.009 Received shutdown signal, test time was about 4.000000 seconds 00:17:24.009 00:17:24.009 Latency(us) 00:17:24.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.009 =================================================================================================================== 00:17:24.009 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.009 21:08:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:24.009 21:08:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:24.009 21:08:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72188' 00:17:24.009 21:08:37 -- common/autotest_common.sh@945 -- # kill 72188 00:17:24.009 21:08:37 -- common/autotest_common.sh@950 -- # wait 72188 00:17:24.976 21:08:38 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:17:24.976 21:08:38 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:24.976 21:08:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:24.976 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:17:24.976 Remove shared memory files 00:17:24.976 21:08:38 -- ftl/bdevperf.sh@41 -- # remove_shm 00:17:24.976 21:08:38 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:24.976 21:08:38 -- ftl/common.sh@205 -- # rm -f rm -f 00:17:24.976 21:08:38 -- ftl/common.sh@206 -- # rm -f rm -f 00:17:24.976 21:08:38 -- ftl/common.sh@207 -- # rm -f rm -f 00:17:24.976 21:08:38 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:24.976 21:08:38 -- ftl/common.sh@209 -- # rm -f rm -f 00:17:24.976 ************************************ 00:17:24.976 END TEST ftl_bdevperf 00:17:24.976 ************************************ 00:17:24.976 00:17:24.976 real 0m21.871s 00:17:24.976 user 0m25.176s 00:17:24.976 sys 0m1.023s 00:17:24.976 21:08:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.976 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:17:24.976 21:08:38 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:17:24.976 21:08:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:24.976 21:08:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:24.976 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:17:24.976 ************************************ 
00:17:24.976 START TEST ftl_trim 00:17:24.976 ************************************ 00:17:24.976 21:08:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:17:24.976 * Looking for test storage... 00:17:24.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.976 21:08:38 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:24.976 21:08:38 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:24.976 21:08:38 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.976 21:08:38 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.976 21:08:38 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:25.257 21:08:38 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:25.257 21:08:38 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.257 21:08:38 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:25.257 21:08:38 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:25.257 21:08:38 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.257 21:08:38 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.257 21:08:38 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:25.257 21:08:38 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:25.257 21:08:38 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.257 21:08:38 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.257 21:08:38 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:25.257 21:08:38 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:25.257 21:08:38 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.257 21:08:38 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.257 21:08:38 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:25.257 21:08:38 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:25.257 21:08:38 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.257 21:08:38 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.257 21:08:38 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.257 21:08:38 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.257 21:08:38 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:25.257 21:08:38 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:25.257 21:08:38 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.257 21:08:38 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.257 21:08:38 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.257 21:08:38 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:17:25.257 21:08:38 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:17:25.257 21:08:38 -- ftl/trim.sh@25 -- # timeout=240 00:17:25.257 21:08:38 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:25.257 21:08:38 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:25.257 21:08:38 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:25.257 21:08:38 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:17:25.257 21:08:38 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:25.257 21:08:38 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.257 21:08:38 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.257 21:08:38 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:25.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.257 21:08:38 -- ftl/trim.sh@40 -- # svcpid=72542 00:17:25.257 21:08:38 -- ftl/trim.sh@41 -- # waitforlisten 72542 00:17:25.257 21:08:38 -- common/autotest_common.sh@819 -- # '[' -z 72542 ']' 00:17:25.257 21:08:38 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:25.257 21:08:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.257 21:08:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:25.257 21:08:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.257 21:08:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:25.257 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:17:25.257 [2024-07-13 21:08:38.982470] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:25.257 [2024-07-13 21:08:38.982614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72542 ] 00:17:25.257 [2024-07-13 21:08:39.141946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:25.515 [2024-07-13 21:08:39.320233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:25.515 [2024-07-13 21:08:39.320937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.515 [2024-07-13 21:08:39.321058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.515 [2024-07-13 21:08:39.321072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.893 21:08:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:26.893 21:08:40 -- common/autotest_common.sh@852 -- # return 0 00:17:26.893 21:08:40 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:26.893 21:08:40 -- ftl/common.sh@54 -- # local name=nvme0 00:17:26.893 21:08:40 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:26.893 21:08:40 -- ftl/common.sh@56 -- # local size=103424 00:17:26.893 21:08:40 -- ftl/common.sh@59 -- # local base_bdev 00:17:26.893 21:08:40 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:27.152 21:08:40 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:27.152 21:08:40 -- ftl/common.sh@62 -- # local base_size 00:17:27.152 21:08:40 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:27.152 21:08:40 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:17:27.152 21:08:40 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:27.152 21:08:40 -- common/autotest_common.sh@1359 -- # local bs 00:17:27.152 21:08:40 -- common/autotest_common.sh@1360 -- # local nb 00:17:27.152 21:08:40 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:27.411 
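At this point spdk_tgt is up with -m 0x7 (three reactors, matching the "Total cores available: 3" message above), and create_base_bdev attaches the QEMU NVMe controller at 0000:00:07.0 as the base device; the JSON that follows is the reply to the bdev_get_bdevs call at the end of the trace above. The same sequence, reduced to the two RPCs actually issued here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0   # exposes nvme0n1
  $rpc bdev_get_bdevs -b nvme0n1                                      # JSON shown below

The reported geometry (block_size 4096, num_blocks 1310720) works out to 1310720 * 4096 / 1048576 = 5120 MiB, which is where the bdev_size=5120 value in the trace comes from.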
21:08:41 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:27.411 { 00:17:27.411 "name": "nvme0n1", 00:17:27.411 "aliases": [ 00:17:27.411 "f0c5a718-8b88-4721-969b-21f22d2b4363" 00:17:27.411 ], 00:17:27.411 "product_name": "NVMe disk", 00:17:27.411 "block_size": 4096, 00:17:27.411 "num_blocks": 1310720, 00:17:27.411 "uuid": "f0c5a718-8b88-4721-969b-21f22d2b4363", 00:17:27.411 "assigned_rate_limits": { 00:17:27.411 "rw_ios_per_sec": 0, 00:17:27.411 "rw_mbytes_per_sec": 0, 00:17:27.411 "r_mbytes_per_sec": 0, 00:17:27.411 "w_mbytes_per_sec": 0 00:17:27.411 }, 00:17:27.411 "claimed": true, 00:17:27.411 "claim_type": "read_many_write_one", 00:17:27.411 "zoned": false, 00:17:27.411 "supported_io_types": { 00:17:27.411 "read": true, 00:17:27.411 "write": true, 00:17:27.411 "unmap": true, 00:17:27.411 "write_zeroes": true, 00:17:27.411 "flush": true, 00:17:27.411 "reset": true, 00:17:27.411 "compare": true, 00:17:27.411 "compare_and_write": false, 00:17:27.411 "abort": true, 00:17:27.411 "nvme_admin": true, 00:17:27.411 "nvme_io": true 00:17:27.411 }, 00:17:27.411 "driver_specific": { 00:17:27.411 "nvme": [ 00:17:27.411 { 00:17:27.411 "pci_address": "0000:00:07.0", 00:17:27.411 "trid": { 00:17:27.411 "trtype": "PCIe", 00:17:27.411 "traddr": "0000:00:07.0" 00:17:27.411 }, 00:17:27.411 "ctrlr_data": { 00:17:27.411 "cntlid": 0, 00:17:27.411 "vendor_id": "0x1b36", 00:17:27.411 "model_number": "QEMU NVMe Ctrl", 00:17:27.411 "serial_number": "12341", 00:17:27.411 "firmware_revision": "8.0.0", 00:17:27.411 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:27.411 "oacs": { 00:17:27.411 "security": 0, 00:17:27.411 "format": 1, 00:17:27.411 "firmware": 0, 00:17:27.411 "ns_manage": 1 00:17:27.411 }, 00:17:27.411 "multi_ctrlr": false, 00:17:27.411 "ana_reporting": false 00:17:27.411 }, 00:17:27.411 "vs": { 00:17:27.411 "nvme_version": "1.4" 00:17:27.411 }, 00:17:27.411 "ns_data": { 00:17:27.411 "id": 1, 00:17:27.411 "can_share": false 00:17:27.411 } 00:17:27.411 } 00:17:27.411 ], 00:17:27.411 "mp_policy": "active_passive" 00:17:27.411 } 00:17:27.411 } 00:17:27.411 ]' 00:17:27.411 21:08:41 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:27.411 21:08:41 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:27.411 21:08:41 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:27.411 21:08:41 -- common/autotest_common.sh@1363 -- # nb=1310720 00:17:27.411 21:08:41 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:17:27.411 21:08:41 -- common/autotest_common.sh@1367 -- # echo 5120 00:17:27.411 21:08:41 -- ftl/common.sh@63 -- # base_size=5120 00:17:27.411 21:08:41 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:27.411 21:08:41 -- ftl/common.sh@67 -- # clear_lvols 00:17:27.411 21:08:41 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:27.411 21:08:41 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:27.670 21:08:41 -- ftl/common.sh@28 -- # stores=9352dad6-5940-476b-8b68-cdfa4f9297ea 00:17:27.670 21:08:41 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:27.670 21:08:41 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9352dad6-5940-476b-8b68-cdfa4f9297ea 00:17:27.928 21:08:41 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:28.186 21:08:42 -- ftl/common.sh@68 -- # lvs=d8281d7a-f3fd-4e6e-a596-dce698955776 00:17:28.186 21:08:42 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
nvme0n1p0 103424 -t -u d8281d7a-f3fd-4e6e-a596-dce698955776 00:17:28.444 21:08:42 -- ftl/trim.sh@43 -- # split_bdev=fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:28.444 21:08:42 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:28.444 21:08:42 -- ftl/common.sh@35 -- # local name=nvc0 00:17:28.444 21:08:42 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:28.444 21:08:42 -- ftl/common.sh@37 -- # local base_bdev=fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:28.444 21:08:42 -- ftl/common.sh@38 -- # local cache_size= 00:17:28.444 21:08:42 -- ftl/common.sh@41 -- # get_bdev_size fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:28.444 21:08:42 -- common/autotest_common.sh@1357 -- # local bdev_name=fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:28.444 21:08:42 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:28.444 21:08:42 -- common/autotest_common.sh@1359 -- # local bs 00:17:28.444 21:08:42 -- common/autotest_common.sh@1360 -- # local nb 00:17:28.444 21:08:42 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:28.702 21:08:42 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:28.702 { 00:17:28.702 "name": "fe2bee9e-3884-4acf-b459-e2ae55487f8c", 00:17:28.702 "aliases": [ 00:17:28.702 "lvs/nvme0n1p0" 00:17:28.702 ], 00:17:28.702 "product_name": "Logical Volume", 00:17:28.702 "block_size": 4096, 00:17:28.702 "num_blocks": 26476544, 00:17:28.702 "uuid": "fe2bee9e-3884-4acf-b459-e2ae55487f8c", 00:17:28.702 "assigned_rate_limits": { 00:17:28.702 "rw_ios_per_sec": 0, 00:17:28.702 "rw_mbytes_per_sec": 0, 00:17:28.702 "r_mbytes_per_sec": 0, 00:17:28.702 "w_mbytes_per_sec": 0 00:17:28.702 }, 00:17:28.702 "claimed": false, 00:17:28.702 "zoned": false, 00:17:28.702 "supported_io_types": { 00:17:28.703 "read": true, 00:17:28.703 "write": true, 00:17:28.703 "unmap": true, 00:17:28.703 "write_zeroes": true, 00:17:28.703 "flush": false, 00:17:28.703 "reset": true, 00:17:28.703 "compare": false, 00:17:28.703 "compare_and_write": false, 00:17:28.703 "abort": false, 00:17:28.703 "nvme_admin": false, 00:17:28.703 "nvme_io": false 00:17:28.703 }, 00:17:28.703 "driver_specific": { 00:17:28.703 "lvol": { 00:17:28.703 "lvol_store_uuid": "d8281d7a-f3fd-4e6e-a596-dce698955776", 00:17:28.703 "base_bdev": "nvme0n1", 00:17:28.703 "thin_provision": true, 00:17:28.703 "snapshot": false, 00:17:28.703 "clone": false, 00:17:28.703 "esnap_clone": false 00:17:28.703 } 00:17:28.703 } 00:17:28.703 } 00:17:28.703 ]' 00:17:28.703 21:08:42 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:28.703 21:08:42 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:28.703 21:08:42 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:28.703 21:08:42 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:28.703 21:08:42 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:28.703 21:08:42 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:28.703 21:08:42 -- ftl/common.sh@41 -- # local base_size=5171 00:17:28.703 21:08:42 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:28.703 21:08:42 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:28.961 21:08:42 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:28.961 21:08:42 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:28.961 21:08:42 -- ftl/common.sh@48 -- # get_bdev_size fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:28.961 21:08:42 
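The lvol plumbing above: clear_lvols removes the stale lvstore left over from the previous test (9352dad6-...), bdev_lvol_create_lvstore rebuilds an lvstore named lvs on nvme0n1, bdev_lvol_create carves a thin-provisioned 103424 MiB volume out of it, and the cache controller at 0000:00:06.0 is attached as nvc0. A condensed sketch of those RPCs as issued in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs       # prints the lvstore UUID (d8281d7a-... in this run)
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u d8281d7a-f3fd-4e6e-a596-dce698955776   # -t = thin-provisioned
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0                    # exposes nvc0n1

The lvol's num_blocks of 26476544 matches the requested size exactly: 26476544 * 4096 / 1048576 = 103424 MiB.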
-- common/autotest_common.sh@1357 -- # local bdev_name=fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:28.961 21:08:42 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:28.961 21:08:42 -- common/autotest_common.sh@1359 -- # local bs 00:17:28.961 21:08:42 -- common/autotest_common.sh@1360 -- # local nb 00:17:28.961 21:08:42 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:29.220 21:08:43 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:29.220 { 00:17:29.220 "name": "fe2bee9e-3884-4acf-b459-e2ae55487f8c", 00:17:29.220 "aliases": [ 00:17:29.220 "lvs/nvme0n1p0" 00:17:29.220 ], 00:17:29.220 "product_name": "Logical Volume", 00:17:29.220 "block_size": 4096, 00:17:29.220 "num_blocks": 26476544, 00:17:29.220 "uuid": "fe2bee9e-3884-4acf-b459-e2ae55487f8c", 00:17:29.220 "assigned_rate_limits": { 00:17:29.220 "rw_ios_per_sec": 0, 00:17:29.220 "rw_mbytes_per_sec": 0, 00:17:29.220 "r_mbytes_per_sec": 0, 00:17:29.220 "w_mbytes_per_sec": 0 00:17:29.220 }, 00:17:29.220 "claimed": false, 00:17:29.220 "zoned": false, 00:17:29.220 "supported_io_types": { 00:17:29.220 "read": true, 00:17:29.220 "write": true, 00:17:29.220 "unmap": true, 00:17:29.220 "write_zeroes": true, 00:17:29.220 "flush": false, 00:17:29.220 "reset": true, 00:17:29.220 "compare": false, 00:17:29.220 "compare_and_write": false, 00:17:29.220 "abort": false, 00:17:29.220 "nvme_admin": false, 00:17:29.220 "nvme_io": false 00:17:29.220 }, 00:17:29.220 "driver_specific": { 00:17:29.220 "lvol": { 00:17:29.220 "lvol_store_uuid": "d8281d7a-f3fd-4e6e-a596-dce698955776", 00:17:29.220 "base_bdev": "nvme0n1", 00:17:29.220 "thin_provision": true, 00:17:29.220 "snapshot": false, 00:17:29.220 "clone": false, 00:17:29.220 "esnap_clone": false 00:17:29.220 } 00:17:29.220 } 00:17:29.220 } 00:17:29.220 ]' 00:17:29.220 21:08:43 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:29.220 21:08:43 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:29.220 21:08:43 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:29.479 21:08:43 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:29.479 21:08:43 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:29.479 21:08:43 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:29.479 21:08:43 -- ftl/common.sh@48 -- # cache_size=5171 00:17:29.479 21:08:43 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:29.479 21:08:43 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:29.479 21:08:43 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:29.479 21:08:43 -- ftl/trim.sh@47 -- # get_bdev_size fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:29.479 21:08:43 -- common/autotest_common.sh@1357 -- # local bdev_name=fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:29.479 21:08:43 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:29.479 21:08:43 -- common/autotest_common.sh@1359 -- # local bs 00:17:29.479 21:08:43 -- common/autotest_common.sh@1360 -- # local nb 00:17:29.479 21:08:43 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe2bee9e-3884-4acf-b459-e2ae55487f8c 00:17:29.737 21:08:43 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:29.737 { 00:17:29.737 "name": "fe2bee9e-3884-4acf-b459-e2ae55487f8c", 00:17:29.737 "aliases": [ 00:17:29.737 "lvs/nvme0n1p0" 00:17:29.737 ], 00:17:29.737 "product_name": "Logical Volume", 00:17:29.737 "block_size": 4096, 00:17:29.737 
"num_blocks": 26476544, 00:17:29.737 "uuid": "fe2bee9e-3884-4acf-b459-e2ae55487f8c", 00:17:29.737 "assigned_rate_limits": { 00:17:29.737 "rw_ios_per_sec": 0, 00:17:29.737 "rw_mbytes_per_sec": 0, 00:17:29.737 "r_mbytes_per_sec": 0, 00:17:29.737 "w_mbytes_per_sec": 0 00:17:29.737 }, 00:17:29.737 "claimed": false, 00:17:29.737 "zoned": false, 00:17:29.737 "supported_io_types": { 00:17:29.737 "read": true, 00:17:29.737 "write": true, 00:17:29.737 "unmap": true, 00:17:29.737 "write_zeroes": true, 00:17:29.737 "flush": false, 00:17:29.737 "reset": true, 00:17:29.737 "compare": false, 00:17:29.737 "compare_and_write": false, 00:17:29.737 "abort": false, 00:17:29.737 "nvme_admin": false, 00:17:29.737 "nvme_io": false 00:17:29.737 }, 00:17:29.737 "driver_specific": { 00:17:29.737 "lvol": { 00:17:29.737 "lvol_store_uuid": "d8281d7a-f3fd-4e6e-a596-dce698955776", 00:17:29.737 "base_bdev": "nvme0n1", 00:17:29.737 "thin_provision": true, 00:17:29.737 "snapshot": false, 00:17:29.737 "clone": false, 00:17:29.737 "esnap_clone": false 00:17:29.737 } 00:17:29.737 } 00:17:29.737 } 00:17:29.737 ]' 00:17:29.737 21:08:43 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:29.995 21:08:43 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:29.995 21:08:43 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:29.995 21:08:43 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:29.995 21:08:43 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:29.995 21:08:43 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:29.995 21:08:43 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:29.995 21:08:43 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fe2bee9e-3884-4acf-b459-e2ae55487f8c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:30.255 [2024-07-13 21:08:43.938771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.938873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:30.255 [2024-07-13 21:08:43.938905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:30.255 [2024-07-13 21:08:43.938918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.942309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.942350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.255 [2024-07-13 21:08:43.942385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.351 ms 00:17:30.255 [2024-07-13 21:08:43.942397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.942556] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:30.255 [2024-07-13 21:08:43.943591] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:30.255 [2024-07-13 21:08:43.943670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.943702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.255 [2024-07-13 21:08:43.943717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:17:30.255 [2024-07-13 21:08:43.943728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.944160] mngt/ftl_mngt_md.c: 
567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f37e351e-d87e-4dea-9055-a9b6d8866d11 00:17:30.255 [2024-07-13 21:08:43.945353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.945395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:30.255 [2024-07-13 21:08:43.945430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:30.255 [2024-07-13 21:08:43.945444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.950102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.950148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.255 [2024-07-13 21:08:43.950180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.565 ms 00:17:30.255 [2024-07-13 21:08:43.950194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.950379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.950404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.255 [2024-07-13 21:08:43.950418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:30.255 [2024-07-13 21:08:43.950435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.950481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.950498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:30.255 [2024-07-13 21:08:43.950514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:30.255 [2024-07-13 21:08:43.950527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.950572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:30.255 [2024-07-13 21:08:43.954988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.955022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.255 [2024-07-13 21:08:43.955056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.423 ms 00:17:30.255 [2024-07-13 21:08:43.955067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.955152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.955169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:30.255 [2024-07-13 21:08:43.955183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:30.255 [2024-07-13 21:08:43.955194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.955232] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:30.255 [2024-07-13 21:08:43.955380] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:30.255 [2024-07-13 21:08:43.955403] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:30.255 [2024-07-13 21:08:43.955419] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:17:30.255 [2024-07-13 21:08:43.955435] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:30.255 [2024-07-13 21:08:43.955449] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:30.255 [2024-07-13 21:08:43.955463] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:30.255 [2024-07-13 21:08:43.955474] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:30.255 [2024-07-13 21:08:43.955490] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:30.255 [2024-07-13 21:08:43.955501] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:30.255 [2024-07-13 21:08:43.955514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.955525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:30.255 [2024-07-13 21:08:43.955539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:17:30.255 [2024-07-13 21:08:43.955549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.955633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.255 [2024-07-13 21:08:43.955647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:30.255 [2024-07-13 21:08:43.955661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:30.255 [2024-07-13 21:08:43.955672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.255 [2024-07-13 21:08:43.955774] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:30.255 [2024-07-13 21:08:43.955789] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:30.255 [2024-07-13 21:08:43.955803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.255 [2024-07-13 21:08:43.955815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.255 [2024-07-13 21:08:43.955828] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:30.255 [2024-07-13 21:08:43.955876] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:30.255 [2024-07-13 21:08:43.955892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:30.255 [2024-07-13 21:08:43.955903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:30.255 [2024-07-13 21:08:43.955916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:30.256 [2024-07-13 21:08:43.955927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.256 [2024-07-13 21:08:43.955939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:30.256 [2024-07-13 21:08:43.955951] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:30.256 [2024-07-13 21:08:43.955965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.256 [2024-07-13 21:08:43.955976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:30.256 [2024-07-13 21:08:43.955989] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:30.256 [2024-07-13 21:08:43.955999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.256 [2024-07-13 21:08:43.956014] ftl_layout.c: 115:dump_region: *NOTICE*: 
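The layout summary above pins down the geometry: 103424 MiB base capacity, 5171 MiB NV cache, 23592960 L2P entries at 4 bytes each, 1024 P2L checkpoint pages, 4 NV cache chunks. The L2P numbers are internally consistent and account for the 90.00 MiB l2p region in the dump:

  echo $(( 23592960 * 4 / 1024 / 1024 ))      # 90    -> full L2P table = 90 MiB
  echo $(( 23592960 * 4096 / 1024 / 1024 ))   # 92160 -> 90 GiB of addressable user blocks

Since the full table (90 MiB) exceeds the 60 MiB --l2p_dram_limit passed to bdev_ftl_create, only part of it can stay resident, matching the "l2p maximum resident size is: 59 (of 60) MiB" notice later in the startup; 23592960 is also exactly the num_blocks the finished ftl0 bdev reports.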
[FTL][ftl0] Region nvc_md_mirror 00:17:30.256 [2024-07-13 21:08:43.956025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:30.256 [2024-07-13 21:08:43.956037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.256 [2024-07-13 21:08:43.956047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:30.256 [2024-07-13 21:08:43.956060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:30.256 [2024-07-13 21:08:43.956082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:30.256 [2024-07-13 21:08:43.956114] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:30.256 [2024-07-13 21:08:43.956126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:30.256 [2024-07-13 21:08:43.956139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:30.256 [2024-07-13 21:08:43.956150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:30.256 [2024-07-13 21:08:43.956164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:30.256 [2024-07-13 21:08:43.956175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:30.256 [2024-07-13 21:08:43.956188] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:30.256 [2024-07-13 21:08:43.956200] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:30.256 [2024-07-13 21:08:43.956229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:30.256 [2024-07-13 21:08:43.956240] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:30.256 [2024-07-13 21:08:43.956254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:30.256 [2024-07-13 21:08:43.956265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:30.256 [2024-07-13 21:08:43.956278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:30.256 [2024-07-13 21:08:43.956289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:30.256 [2024-07-13 21:08:43.956302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.256 [2024-07-13 21:08:43.956312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:30.256 [2024-07-13 21:08:43.956327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:30.256 [2024-07-13 21:08:43.956339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.256 [2024-07-13 21:08:43.956351] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:30.256 [2024-07-13 21:08:43.956363] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:30.256 [2024-07-13 21:08:43.956376] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.256 [2024-07-13 21:08:43.956388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.256 [2024-07-13 21:08:43.956416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:30.256 [2024-07-13 21:08:43.956427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:30.256 [2024-07-13 21:08:43.956439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:30.256 [2024-07-13 21:08:43.956451] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:30.256 [2024-07-13 21:08:43.956479] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:30.256 [2024-07-13 21:08:43.956490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:30.256 [2024-07-13 21:08:43.956504] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:30.256 [2024-07-13 21:08:43.956518] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.256 [2024-07-13 21:08:43.956535] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:30.256 [2024-07-13 21:08:43.956548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:30.256 [2024-07-13 21:08:43.956561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:30.256 [2024-07-13 21:08:43.956572] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:30.256 [2024-07-13 21:08:43.956585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:30.256 [2024-07-13 21:08:43.956597] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:30.256 [2024-07-13 21:08:43.956610] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:30.256 [2024-07-13 21:08:43.956621] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:30.256 [2024-07-13 21:08:43.956656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:30.256 [2024-07-13 21:08:43.956668] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:30.256 [2024-07-13 21:08:43.956681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:30.256 [2024-07-13 21:08:43.956693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:30.256 [2024-07-13 21:08:43.956710] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:30.256 [2024-07-13 21:08:43.956721] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:30.256 [2024-07-13 21:08:43.956735] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.256 [2024-07-13 21:08:43.956748] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:30.256 [2024-07-13 21:08:43.956761] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:30.256 [2024-07-13 21:08:43.956773] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
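The superblock metadata dump prints each region as type/version/block offset/block size, all in hex and apparently in units of 4096-byte blocks (the block size reported throughout this run). Decoding a couple of entries reproduces the MiB figures from the layout dump above:

  printf 'type 0x2: %d MiB\n' $(( 0x5a00 * 4096 / 1048576 ))   # 90 -> lines up with the 90.00 MiB l2p region
  printf 'type 0xa: %d MiB\n' $(( 0x400  * 4096 / 1048576 ))   # 4  -> lines up with the 4.00 MiB p2l regions

The block-size unit here is inferred from that agreement rather than stated explicitly anywhere in the log.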
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:30.256 [2024-07-13 21:08:43.956786] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:30.256 [2024-07-13 21:08:43.956799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.256 [2024-07-13 21:08:43.956812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:30.256 [2024-07-13 21:08:43.956824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:17:30.256 [2024-07-13 21:08:43.956837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.256 [2024-07-13 21:08:43.974381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.256 [2024-07-13 21:08:43.974599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.256 [2024-07-13 21:08:43.974721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.446 ms 00:17:30.256 [2024-07-13 21:08:43.974777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.256 [2024-07-13 21:08:43.975082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.256 [2024-07-13 21:08:43.975227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:30.256 [2024-07-13 21:08:43.975352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:30.256 [2024-07-13 21:08:43.975412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.256 [2024-07-13 21:08:44.013421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.256 [2024-07-13 21:08:44.013673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.256 [2024-07-13 21:08:44.013830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.867 ms 00:17:30.256 [2024-07-13 21:08:44.013907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.256 [2024-07-13 21:08:44.014049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.256 [2024-07-13 21:08:44.014136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.256 [2024-07-13 21:08:44.014179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:30.256 [2024-07-13 21:08:44.014294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.256 [2024-07-13 21:08:44.014681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.256 [2024-07-13 21:08:44.014826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.256 [2024-07-13 21:08:44.014951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:17:30.256 [2024-07-13 21:08:44.015005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.256 [2024-07-13 21:08:44.015247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.256 [2024-07-13 21:08:44.015317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.256 [2024-07-13 21:08:44.015460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:17:30.256 [2024-07-13 21:08:44.015516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.256 [2024-07-13 21:08:44.043098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.256 [2024-07-13 
21:08:44.043329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.256 [2024-07-13 21:08:44.043462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.509 ms 00:17:30.256 [2024-07-13 21:08:44.043521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.256 [2024-07-13 21:08:44.057028] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:30.256 [2024-07-13 21:08:44.070992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.256 [2024-07-13 21:08:44.071277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:30.256 [2024-07-13 21:08:44.071421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.141 ms 00:17:30.256 [2024-07-13 21:08:44.071474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.256 [2024-07-13 21:08:44.141896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.256 [2024-07-13 21:08:44.142156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:30.256 [2024-07-13 21:08:44.142283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.186 ms 00:17:30.256 [2024-07-13 21:08:44.142405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.256 [2024-07-13 21:08:44.142548] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:30.256 [2024-07-13 21:08:44.142754] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:32.788 [2024-07-13 21:08:46.401465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.788 [2024-07-13 21:08:46.401755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:32.788 [2024-07-13 21:08:46.401919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2258.929 ms 00:17:32.788 [2024-07-13 21:08:46.402038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.788 [2024-07-13 21:08:46.402368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.788 [2024-07-13 21:08:46.402502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:32.788 [2024-07-13 21:08:46.402534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:17:32.788 [2024-07-13 21:08:46.402552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.788 [2024-07-13 21:08:46.433799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.788 [2024-07-13 21:08:46.433868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:32.788 [2024-07-13 21:08:46.433909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.196 ms 00:17:32.788 [2024-07-13 21:08:46.433922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.788 [2024-07-13 21:08:46.464115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.788 [2024-07-13 21:08:46.464158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:32.788 [2024-07-13 21:08:46.464182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.093 ms 00:17:32.788 [2024-07-13 21:08:46.464195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.788 [2024-07-13 21:08:46.464651] 
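First startup scrubbed the 4 GiB NV cache data region in 2258.929 ms, by far the longest step in the sequence. As a rough effective-bandwidth figure (wall time, so any concurrent setup work is folded in):

  echo "scale=1; 4096 / 2.258929" | bc    # ~1813 MiB/s, i.e. roughly 1.8 GiB/s of scrub throughput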
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.788 [2024-07-13 21:08:46.464677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:32.788 [2024-07-13 21:08:46.464710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:17:32.788 [2024-07-13 21:08:46.464723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.788 [2024-07-13 21:08:46.541461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.788 [2024-07-13 21:08:46.541523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:32.788 [2024-07-13 21:08:46.541548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.690 ms 00:17:32.788 [2024-07-13 21:08:46.541561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.788 [2024-07-13 21:08:46.573511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.788 [2024-07-13 21:08:46.573555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:32.788 [2024-07-13 21:08:46.573597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.836 ms 00:17:32.789 [2024-07-13 21:08:46.573609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.789 [2024-07-13 21:08:46.577629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.789 [2024-07-13 21:08:46.577668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:32.789 [2024-07-13 21:08:46.577706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.927 ms 00:17:32.789 [2024-07-13 21:08:46.577718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.789 [2024-07-13 21:08:46.608618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.789 [2024-07-13 21:08:46.608675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:32.789 [2024-07-13 21:08:46.608715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.833 ms 00:17:32.789 [2024-07-13 21:08:46.608728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.789 [2024-07-13 21:08:46.608828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.789 [2024-07-13 21:08:46.608886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:32.789 [2024-07-13 21:08:46.608903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:32.789 [2024-07-13 21:08:46.608915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.789 [2024-07-13 21:08:46.609037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.789 [2024-07-13 21:08:46.609056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:32.789 [2024-07-13 21:08:46.609072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:32.789 [2024-07-13 21:08:46.609084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.789 [2024-07-13 21:08:46.610052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.789 [2024-07-13 21:08:46.614102] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2670.908 ms, result 0 00:17:32.789 [2024-07-13 21:08:46.615037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] 
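"FTL startup" completes here in 2670.908 ms, of which the NV cache scrub accounted for the bulk. The waitforbdev helper in the trace below then confirms ftl0 is registered, using the same two RPCs visible in the xtrace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b ftl0 -t 2000 | jq '.[] .num_blocks'   # 23592960

Note in the JSON below that ftl0 advertises flush: true and reset: false, differing from both the base lvol (flush: false) and the raw NVMe namespace (reset: true).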
FTL IO channel destroy on app_thread 00:17:32.789 { 00:17:32.789 "name": "ftl0", 00:17:32.789 "uuid": "f37e351e-d87e-4dea-9055-a9b6d8866d11" 00:17:32.789 } 00:17:32.789 21:08:46 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:32.789 21:08:46 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:17:32.789 21:08:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:32.789 21:08:46 -- common/autotest_common.sh@889 -- # local i 00:17:32.789 21:08:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:32.789 21:08:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:32.789 21:08:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:33.047 21:08:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:33.306 [ 00:17:33.306 { 00:17:33.306 "name": "ftl0", 00:17:33.306 "aliases": [ 00:17:33.306 "f37e351e-d87e-4dea-9055-a9b6d8866d11" 00:17:33.306 ], 00:17:33.306 "product_name": "FTL disk", 00:17:33.306 "block_size": 4096, 00:17:33.306 "num_blocks": 23592960, 00:17:33.306 "uuid": "f37e351e-d87e-4dea-9055-a9b6d8866d11", 00:17:33.306 "assigned_rate_limits": { 00:17:33.306 "rw_ios_per_sec": 0, 00:17:33.306 "rw_mbytes_per_sec": 0, 00:17:33.306 "r_mbytes_per_sec": 0, 00:17:33.306 "w_mbytes_per_sec": 0 00:17:33.306 }, 00:17:33.306 "claimed": false, 00:17:33.306 "zoned": false, 00:17:33.306 "supported_io_types": { 00:17:33.306 "read": true, 00:17:33.306 "write": true, 00:17:33.306 "unmap": true, 00:17:33.306 "write_zeroes": true, 00:17:33.306 "flush": true, 00:17:33.306 "reset": false, 00:17:33.306 "compare": false, 00:17:33.306 "compare_and_write": false, 00:17:33.306 "abort": false, 00:17:33.306 "nvme_admin": false, 00:17:33.306 "nvme_io": false 00:17:33.306 }, 00:17:33.306 "driver_specific": { 00:17:33.306 "ftl": { 00:17:33.306 "base_bdev": "fe2bee9e-3884-4acf-b459-e2ae55487f8c", 00:17:33.306 "cache": "nvc0n1p0" 00:17:33.306 } 00:17:33.306 } 00:17:33.306 } 00:17:33.306 ] 00:17:33.306 21:08:47 -- common/autotest_common.sh@895 -- # return 0 00:17:33.306 21:08:47 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:33.306 21:08:47 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:33.564 21:08:47 -- ftl/trim.sh@56 -- # echo ']}' 00:17:33.564 21:08:47 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:33.823 21:08:47 -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:33.823 { 00:17:33.823 "name": "ftl0", 00:17:33.823 "aliases": [ 00:17:33.823 "f37e351e-d87e-4dea-9055-a9b6d8866d11" 00:17:33.823 ], 00:17:33.823 "product_name": "FTL disk", 00:17:33.823 "block_size": 4096, 00:17:33.823 "num_blocks": 23592960, 00:17:33.823 "uuid": "f37e351e-d87e-4dea-9055-a9b6d8866d11", 00:17:33.823 "assigned_rate_limits": { 00:17:33.823 "rw_ios_per_sec": 0, 00:17:33.823 "rw_mbytes_per_sec": 0, 00:17:33.823 "r_mbytes_per_sec": 0, 00:17:33.823 "w_mbytes_per_sec": 0 00:17:33.823 }, 00:17:33.823 "claimed": false, 00:17:33.823 "zoned": false, 00:17:33.823 "supported_io_types": { 00:17:33.823 "read": true, 00:17:33.823 "write": true, 00:17:33.823 "unmap": true, 00:17:33.823 "write_zeroes": true, 00:17:33.823 "flush": true, 00:17:33.823 "reset": false, 00:17:33.823 "compare": false, 00:17:33.823 "compare_and_write": false, 00:17:33.823 "abort": false, 00:17:33.823 "nvme_admin": false, 00:17:33.823 "nvme_io": false 00:17:33.823 }, 00:17:33.823 "driver_specific": { 00:17:33.823 "ftl": { 
00:17:33.823 "base_bdev": "fe2bee9e-3884-4acf-b459-e2ae55487f8c", 00:17:33.823 "cache": "nvc0n1p0" 00:17:33.823 } 00:17:33.823 } 00:17:33.823 } 00:17:33.823 ]' 00:17:33.823 21:08:47 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:33.823 21:08:47 -- ftl/trim.sh@60 -- # nb=23592960 00:17:33.823 21:08:47 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:34.081 [2024-07-13 21:08:47.930426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.081 [2024-07-13 21:08:47.930716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:34.081 [2024-07-13 21:08:47.930749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:34.081 [2024-07-13 21:08:47.930766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.081 [2024-07-13 21:08:47.930827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:34.081 [2024-07-13 21:08:47.934263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.081 [2024-07-13 21:08:47.934294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:34.081 [2024-07-13 21:08:47.934328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.390 ms 00:17:34.082 [2024-07-13 21:08:47.934340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.082 [2024-07-13 21:08:47.934952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.082 [2024-07-13 21:08:47.934977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:34.082 [2024-07-13 21:08:47.934996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:17:34.082 [2024-07-13 21:08:47.935008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.082 [2024-07-13 21:08:47.938819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.082 [2024-07-13 21:08:47.938877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:34.082 [2024-07-13 21:08:47.938899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.773 ms 00:17:34.082 [2024-07-13 21:08:47.938912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.082 [2024-07-13 21:08:47.946713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.082 [2024-07-13 21:08:47.946748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:34.082 [2024-07-13 21:08:47.946782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.724 ms 00:17:34.082 [2024-07-13 21:08:47.946794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.082 [2024-07-13 21:08:47.978263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.082 [2024-07-13 21:08:47.978304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:34.082 [2024-07-13 21:08:47.978340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.339 ms 00:17:34.082 [2024-07-13 21:08:47.978351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.082 [2024-07-13 21:08:47.996801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.082 [2024-07-13 21:08:47.996873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:34.082 [2024-07-13 21:08:47.996911] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.357 ms 00:17:34.082 [2024-07-13 21:08:47.996924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.082 [2024-07-13 21:08:47.997185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.082 [2024-07-13 21:08:47.997218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:34.082 [2024-07-13 21:08:47.997237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:34.082 [2024-07-13 21:08:47.997253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.341 [2024-07-13 21:08:48.028545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.341 [2024-07-13 21:08:48.028595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:34.341 [2024-07-13 21:08:48.028632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.252 ms 00:17:34.341 [2024-07-13 21:08:48.028644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.341 [2024-07-13 21:08:48.058650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.341 [2024-07-13 21:08:48.058689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:34.341 [2024-07-13 21:08:48.058726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.912 ms 00:17:34.341 [2024-07-13 21:08:48.058738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.341 [2024-07-13 21:08:48.088548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.341 [2024-07-13 21:08:48.088586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:34.341 [2024-07-13 21:08:48.088621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.718 ms 00:17:34.341 [2024-07-13 21:08:48.088633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.341 [2024-07-13 21:08:48.117979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.341 [2024-07-13 21:08:48.118019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:34.341 [2024-07-13 21:08:48.118040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.203 ms 00:17:34.341 [2024-07-13 21:08:48.118051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.341 [2024-07-13 21:08:48.118145] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:34.341 [2024-07-13 21:08:48.118170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:34.341 [2024-07-13 21:08:48.118190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:34.341 [2024-07-13 21:08:48.118202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:34.341 [2024-07-13 21:08:48.118216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:34.341 [2024-07-13 21:08:48.118228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:34.341 [2024-07-13 21:08:48.118241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:34.341 [2024-07-13 21:08:48.118253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:34.341 
[2024-07-13 21:08:48.118266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:34.341 [2024-07-13 21:08:48.118278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:17:34.342 [2024-07-13 21:08:48.118607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.118998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:34.342 [2024-07-13 21:08:48.119619] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:34.342 [2024-07-13 21:08:48.119634] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f37e351e-d87e-4dea-9055-a9b6d8866d11 00:17:34.342 [2024-07-13 21:08:48.119647] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:34.343 [2024-07-13 21:08:48.119660] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:34.343 [2024-07-13 21:08:48.119672] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:34.343 [2024-07-13 21:08:48.119685] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:34.343 [2024-07-13 21:08:48.119697] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:34.343 [2024-07-13 21:08:48.119711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:17:34.343 [2024-07-13 21:08:48.119723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:34.343 [2024-07-13 21:08:48.119737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:34.343 [2024-07-13 21:08:48.119747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:34.343 [2024-07-13 21:08:48.119761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.343 [2024-07-13 21:08:48.119774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:34.343 [2024-07-13 21:08:48.119788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:17:34.343 [2024-07-13 21:08:48.119803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.343 [2024-07-13 21:08:48.136233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.343 [2024-07-13 21:08:48.136275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:34.343 [2024-07-13 21:08:48.136314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.381 ms 00:17:34.343 [2024-07-13 21:08:48.136326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.343 [2024-07-13 21:08:48.136649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.343 [2024-07-13 21:08:48.136675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:34.343 [2024-07-13 21:08:48.136692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:17:34.343 [2024-07-13 21:08:48.136704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.343 [2024-07-13 21:08:48.192500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.343 [2024-07-13 21:08:48.192558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.343 [2024-07-13 21:08:48.192598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.343 [2024-07-13 21:08:48.192610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.343 [2024-07-13 21:08:48.192749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.343 [2024-07-13 21:08:48.192769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.343 [2024-07-13 21:08:48.192784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.343 [2024-07-13 21:08:48.192795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.343 [2024-07-13 21:08:48.192911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.343 [2024-07-13 21:08:48.192931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.343 [2024-07-13 21:08:48.192946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.343 [2024-07-13 21:08:48.192958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.343 [2024-07-13 21:08:48.193013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.343 [2024-07-13 21:08:48.193027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.343 [2024-07-13 21:08:48.193041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.343 [2024-07-13 21:08:48.193055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.615 [2024-07-13 
21:08:48.301482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.615 [2024-07-13 21:08:48.301545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.615 [2024-07-13 21:08:48.301585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.615 [2024-07-13 21:08:48.301597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.615 [2024-07-13 21:08:48.338580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.615 [2024-07-13 21:08:48.338624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.615 [2024-07-13 21:08:48.338648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.615 [2024-07-13 21:08:48.338661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.615 [2024-07-13 21:08:48.338750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.615 [2024-07-13 21:08:48.338769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.615 [2024-07-13 21:08:48.338784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.615 [2024-07-13 21:08:48.338796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.615 [2024-07-13 21:08:48.338888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.615 [2024-07-13 21:08:48.338905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.615 [2024-07-13 21:08:48.338920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.615 [2024-07-13 21:08:48.338932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.615 [2024-07-13 21:08:48.339083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.615 [2024-07-13 21:08:48.339102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.615 [2024-07-13 21:08:48.339130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.615 [2024-07-13 21:08:48.339144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.615 [2024-07-13 21:08:48.339252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.615 [2024-07-13 21:08:48.339275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:34.615 [2024-07-13 21:08:48.339292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.615 [2024-07-13 21:08:48.339304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.615 [2024-07-13 21:08:48.339368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.615 [2024-07-13 21:08:48.339390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.615 [2024-07-13 21:08:48.339405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.615 [2024-07-13 21:08:48.339418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.615 [2024-07-13 21:08:48.339489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.615 [2024-07-13 21:08:48.339505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.615 [2024-07-13 21:08:48.339520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.615 [2024-07-13 21:08:48.339532] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.615 [2024-07-13 21:08:48.339760] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 409.302 ms, result 0 00:17:34.615 true 00:17:34.615 21:08:48 -- ftl/trim.sh@63 -- # killprocess 72542 00:17:34.615 21:08:48 -- common/autotest_common.sh@926 -- # '[' -z 72542 ']' 00:17:34.615 21:08:48 -- common/autotest_common.sh@930 -- # kill -0 72542 00:17:34.615 21:08:48 -- common/autotest_common.sh@931 -- # uname 00:17:34.615 21:08:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:34.615 21:08:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72542 00:17:34.615 killing process with pid 72542 00:17:34.615 21:08:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:34.615 21:08:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:34.615 21:08:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72542' 00:17:34.615 21:08:48 -- common/autotest_common.sh@945 -- # kill 72542 00:17:34.615 21:08:48 -- common/autotest_common.sh@950 -- # wait 72542 00:17:38.797 21:08:52 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:40.181 65536+0 records in 00:17:40.181 65536+0 records out 00:17:40.181 268435456 bytes (268 MB, 256 MiB) copied, 1.06734 s, 251 MB/s 00:17:40.181 21:08:53 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:40.181 [2024-07-13 21:08:53.832352] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:40.181 [2024-07-13 21:08:53.832544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72754 ] 00:17:40.181 [2024-07-13 21:08:53.989120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.439 [2024-07-13 21:08:54.145827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.700 [2024-07-13 21:08:54.413970] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.700 [2024-07-13 21:08:54.414060] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.700 [2024-07-13 21:08:54.569991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.570042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:40.700 [2024-07-13 21:08:54.570076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:40.700 [2024-07-13 21:08:54.570090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.573077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.573116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.700 [2024-07-13 21:08:54.573148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.963 ms 00:17:40.700 [2024-07-13 21:08:54.573162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.573286] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:40.700 [2024-07-13 21:08:54.574260] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:40.700 [2024-07-13 21:08:54.574314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.574347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.700 [2024-07-13 21:08:54.574358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:17:40.700 [2024-07-13 21:08:54.574368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.575579] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:40.700 [2024-07-13 21:08:54.590247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.590287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:40.700 [2024-07-13 21:08:54.590321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.669 ms 00:17:40.700 [2024-07-13 21:08:54.590331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.590443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.590463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:40.700 [2024-07-13 21:08:54.590478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:40.700 [2024-07-13 21:08:54.590489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.595110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.595149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.700 [2024-07-13 21:08:54.595180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.569 ms 00:17:40.700 [2024-07-13 21:08:54.595190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.595318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.595341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.700 [2024-07-13 21:08:54.595353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:40.700 [2024-07-13 21:08:54.595363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.595398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.595411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.700 [2024-07-13 21:08:54.595423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:40.700 [2024-07-13 21:08:54.595432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.595464] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:40.700 [2024-07-13 21:08:54.599803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.599864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.700 [2024-07-13 21:08:54.599882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.351 ms 00:17:40.700 [2024-07-13 21:08:54.599893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 
[2024-07-13 21:08:54.599995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.600022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.700 [2024-07-13 21:08:54.600033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:40.700 [2024-07-13 21:08:54.600043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.600074] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:40.700 [2024-07-13 21:08:54.600153] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:40.700 [2024-07-13 21:08:54.600195] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:40.700 [2024-07-13 21:08:54.600214] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:40.700 [2024-07-13 21:08:54.600303] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:40.700 [2024-07-13 21:08:54.600321] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.700 [2024-07-13 21:08:54.600345] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:40.700 [2024-07-13 21:08:54.600370] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.700 [2024-07-13 21:08:54.600393] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.700 [2024-07-13 21:08:54.600425] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:40.700 [2024-07-13 21:08:54.600443] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.700 [2024-07-13 21:08:54.600461] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:40.700 [2024-07-13 21:08:54.600480] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:40.700 [2024-07-13 21:08:54.600499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.600534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.700 [2024-07-13 21:08:54.600555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:17:40.700 [2024-07-13 21:08:54.600590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.600755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.700 [2024-07-13 21:08:54.600795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.700 [2024-07-13 21:08:54.600819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:40.700 [2024-07-13 21:08:54.600882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.700 [2024-07-13 21:08:54.601007] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.700 [2024-07-13 21:08:54.601046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.700 [2024-07-13 21:08:54.601066] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.700 [2024-07-13 21:08:54.601101] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.700 [2024-07-13 21:08:54.601119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.700 [2024-07-13 21:08:54.601136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.700 [2024-07-13 21:08:54.601153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:40.700 [2024-07-13 21:08:54.601169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.700 [2024-07-13 21:08:54.601186] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:40.700 [2024-07-13 21:08:54.601202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.700 [2024-07-13 21:08:54.601233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.700 [2024-07-13 21:08:54.601249] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:40.700 [2024-07-13 21:08:54.601279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.700 [2024-07-13 21:08:54.601294] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.700 [2024-07-13 21:08:54.601310] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:40.700 [2024-07-13 21:08:54.601327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.700 [2024-07-13 21:08:54.601344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:40.700 [2024-07-13 21:08:54.601363] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:40.700 [2024-07-13 21:08:54.601381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.700 [2024-07-13 21:08:54.601416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:40.700 [2024-07-13 21:08:54.601432] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:40.700 [2024-07-13 21:08:54.601449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:40.700 [2024-07-13 21:08:54.601467] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.700 [2024-07-13 21:08:54.601483] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:40.700 [2024-07-13 21:08:54.601500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:40.700 [2024-07-13 21:08:54.601516] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.700 [2024-07-13 21:08:54.601534] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:40.700 [2024-07-13 21:08:54.601551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:40.700 [2024-07-13 21:08:54.601567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.700 [2024-07-13 21:08:54.601616] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:40.700 [2024-07-13 21:08:54.601634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:40.700 [2024-07-13 21:08:54.601652] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.700 [2024-07-13 21:08:54.601670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:40.700 [2024-07-13 21:08:54.601687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:40.700 [2024-07-13 21:08:54.601704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.700 [2024-07-13 21:08:54.601723] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:40.700 [2024-07-13 21:08:54.601743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.700 [2024-07-13 21:08:54.601776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.701 [2024-07-13 21:08:54.601795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:40.701 [2024-07-13 21:08:54.601813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.701 [2024-07-13 21:08:54.601831] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.701 [2024-07-13 21:08:54.601852] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.701 [2024-07-13 21:08:54.601871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.701 [2024-07-13 21:08:54.601892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.701 [2024-07-13 21:08:54.601937] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.701 [2024-07-13 21:08:54.601972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.701 [2024-07-13 21:08:54.602005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.701 [2024-07-13 21:08:54.602023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.701 [2024-07-13 21:08:54.602041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.701 [2024-07-13 21:08:54.602060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.701 [2024-07-13 21:08:54.602079] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.701 [2024-07-13 21:08:54.602110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.701 [2024-07-13 21:08:54.602130] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:40.701 [2024-07-13 21:08:54.602148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:40.701 [2024-07-13 21:08:54.602166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:40.701 [2024-07-13 21:08:54.602185] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:40.701 [2024-07-13 21:08:54.602202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:40.701 [2024-07-13 21:08:54.602219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:40.701 [2024-07-13 21:08:54.602236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:40.701 [2024-07-13 21:08:54.602253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:40.701 [2024-07-13 21:08:54.602271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:40.701 
[2024-07-13 21:08:54.602290] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:40.701 [2024-07-13 21:08:54.602307] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:40.701 [2024-07-13 21:08:54.602327] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:40.701 [2024-07-13 21:08:54.602346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:40.701 [2024-07-13 21:08:54.602365] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.701 [2024-07-13 21:08:54.602386] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.701 [2024-07-13 21:08:54.602408] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.701 [2024-07-13 21:08:54.602427] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.701 [2024-07-13 21:08:54.602446] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.701 [2024-07-13 21:08:54.602467] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.701 [2024-07-13 21:08:54.602489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.701 [2024-07-13 21:08:54.602521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.701 [2024-07-13 21:08:54.602542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:17:40.701 [2024-07-13 21:08:54.602562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.701 [2024-07-13 21:08:54.621696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.701 [2024-07-13 21:08:54.621747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.701 [2024-07-13 21:08:54.621781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.021 ms 00:17:40.701 [2024-07-13 21:08:54.621792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.701 [2024-07-13 21:08:54.622009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.701 [2024-07-13 21:08:54.622030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.701 [2024-07-13 21:08:54.622043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:40.701 [2024-07-13 21:08:54.622054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.961 [2024-07-13 21:08:54.669341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.961 [2024-07-13 21:08:54.669394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.961 [2024-07-13 21:08:54.669429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.256 ms 00:17:40.961 [2024-07-13 21:08:54.669440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.961 
[2024-07-13 21:08:54.669552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.961 [2024-07-13 21:08:54.669570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.961 [2024-07-13 21:08:54.669582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:40.961 [2024-07-13 21:08:54.669592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.961 [2024-07-13 21:08:54.669955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.961 [2024-07-13 21:08:54.669974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.961 [2024-07-13 21:08:54.669986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:17:40.961 [2024-07-13 21:08:54.669996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.961 [2024-07-13 21:08:54.670134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.961 [2024-07-13 21:08:54.670151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.961 [2024-07-13 21:08:54.670162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:40.961 [2024-07-13 21:08:54.670172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.961 [2024-07-13 21:08:54.686424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.961 [2024-07-13 21:08:54.686467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.961 [2024-07-13 21:08:54.686500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.224 ms 00:17:40.961 [2024-07-13 21:08:54.686511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.961 [2024-07-13 21:08:54.701645] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:40.961 [2024-07-13 21:08:54.701687] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:40.961 [2024-07-13 21:08:54.701724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.961 [2024-07-13 21:08:54.701735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:40.961 [2024-07-13 21:08:54.701747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.055 ms 00:17:40.961 [2024-07-13 21:08:54.701757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.728962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.729002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:40.962 [2024-07-13 21:08:54.729034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.087 ms 00:17:40.962 [2024-07-13 21:08:54.729045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.743501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.743538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:40.962 [2024-07-13 21:08:54.743570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.361 ms 00:17:40.962 [2024-07-13 21:08:54.743580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.757845] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.757880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:40.962 [2024-07-13 21:08:54.757932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.184 ms 00:17:40.962 [2024-07-13 21:08:54.757942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.758382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.758409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.962 [2024-07-13 21:08:54.758438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:17:40.962 [2024-07-13 21:08:54.758480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.828002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.828061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:40.962 [2024-07-13 21:08:54.828091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.488 ms 00:17:40.962 [2024-07-13 21:08:54.828105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.840697] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:40.962 [2024-07-13 21:08:54.854354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.854421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:40.962 [2024-07-13 21:08:54.854455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.092 ms 00:17:40.962 [2024-07-13 21:08:54.854467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.854604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.854630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:40.962 [2024-07-13 21:08:54.854643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:40.962 [2024-07-13 21:08:54.854654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.854741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.854757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:40.962 [2024-07-13 21:08:54.854768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:40.962 [2024-07-13 21:08:54.854781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.856953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.856991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:40.962 [2024-07-13 21:08:54.857021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.143 ms 00:17:40.962 [2024-07-13 21:08:54.857032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.857070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.857083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:40.962 [2024-07-13 21:08:54.857094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.006 ms 00:17:40.962 [2024-07-13 21:08:54.857104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.962 [2024-07-13 21:08:54.857146] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:40.962 [2024-07-13 21:08:54.857160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.962 [2024-07-13 21:08:54.857170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:40.962 [2024-07-13 21:08:54.857180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:40.962 [2024-07-13 21:08:54.857193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-07-13 21:08:54.885665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-07-13 21:08:54.885885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.221 [2024-07-13 21:08:54.886006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.444 ms 00:17:41.221 [2024-07-13 21:08:54.886067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-07-13 21:08:54.886347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.221 [2024-07-13 21:08:54.886533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:41.221 [2024-07-13 21:08:54.886682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:41.221 [2024-07-13 21:08:54.886792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.221 [2024-07-13 21:08:54.887964] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.221 [2024-07-13 21:08:54.892060] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 317.526 ms, result 0 00:17:41.221 [2024-07-13 21:08:54.892952] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.221 [2024-07-13 21:08:54.908220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:52.417  Copying: 256/256 [MB] (average 22 MBps)[2024-07-13 21:09:06.142357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:52.417 [2024-07-13 21:09:06.153699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.153737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:52.417 [2024-07-13 21:09:06.153771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:52.417 [2024-07-13 21:09:06.153781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.153815] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:52.417 [2024-07-13 21:09:06.156986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.157015] mngt/ftl_mngt.c:
407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:52.417 [2024-07-13 21:09:06.157043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:17:52.417 [2024-07-13 21:09:06.157053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.158872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.158937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:52.417 [2024-07-13 21:09:06.158969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.793 ms 00:17:52.417 [2024-07-13 21:09:06.158980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.165700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.165732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:52.417 [2024-07-13 21:09:06.165767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.696 ms 00:17:52.417 [2024-07-13 21:09:06.165777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.172562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.172606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:52.417 [2024-07-13 21:09:06.172635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.686 ms 00:17:52.417 [2024-07-13 21:09:06.172645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.199726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.199763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:52.417 [2024-07-13 21:09:06.199794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.028 ms 00:17:52.417 [2024-07-13 21:09:06.199804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.215924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.215960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:52.417 [2024-07-13 21:09:06.215991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.003 ms 00:17:52.417 [2024-07-13 21:09:06.216006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.216189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.216209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:52.417 [2024-07-13 21:09:06.216222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:17:52.417 [2024-07-13 21:09:06.216232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.243756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.243793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:52.417 [2024-07-13 21:09:06.243823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.502 ms 00:17:52.417 [2024-07-13 21:09:06.243861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.271047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:52.417 [2024-07-13 21:09:06.271084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:52.417 [2024-07-13 21:09:06.271113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.068 ms 00:17:52.417 [2024-07-13 21:09:06.271123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.298081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.298117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:52.417 [2024-07-13 21:09:06.298147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.888 ms 00:17:52.417 [2024-07-13 21:09:06.298156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.325060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.417 [2024-07-13 21:09:06.325096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:52.417 [2024-07-13 21:09:06.325126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.806 ms 00:17:52.417 [2024-07-13 21:09:06.325135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.417 [2024-07-13 21:09:06.325205] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:52.417 [2024-07-13 21:09:06.325232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:52.417 [2024-07-13 21:09:06.325245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:52.417 [2024-07-13 21:09:06.325256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:52.417 [2024-07-13 21:09:06.325266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:52.417 [2024-07-13 21:09:06.325275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325384] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 
[2024-07-13 21:09:06.325632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 
state: free 00:17:52.418 [2024-07-13 21:09:06.325915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.325992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 
0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:52.418 [2024-07-13 21:09:06.326238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:52.419 [2024-07-13 21:09:06.326268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:52.419 [2024-07-13 21:09:06.326296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:52.419 [2024-07-13 21:09:06.326321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:52.419 [2024-07-13 21:09:06.326332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:52.419 [2024-07-13 21:09:06.326351] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:52.419 [2024-07-13 21:09:06.326362] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f37e351e-d87e-4dea-9055-a9b6d8866d11 00:17:52.419 [2024-07-13 21:09:06.326400] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:52.419 [2024-07-13 21:09:06.326411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:52.419 [2024-07-13 21:09:06.326421] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:52.419 [2024-07-13 21:09:06.326432] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:52.419 [2024-07-13 21:09:06.326442] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:52.419 [2024-07-13 21:09:06.326453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:52.419 [2024-07-13 21:09:06.326463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:52.419 [2024-07-13 21:09:06.326472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:52.419 [2024-07-13 21:09:06.326481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:52.419 [2024-07-13 21:09:06.326492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.419 [2024-07-13 21:09:06.326502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:52.419 [2024-07-13 21:09:06.326520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:17:52.419 [2024-07-13 21:09:06.326530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.341520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.678 [2024-07-13 21:09:06.341553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:52.678 [2024-07-13 21:09:06.341583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.965 ms 00:17:52.678 [2024-07-13 
21:09:06.341593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.341830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.678 [2024-07-13 21:09:06.341847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:52.678 [2024-07-13 21:09:06.341915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:17:52.678 [2024-07-13 21:09:06.341926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.384451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.384510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.678 [2024-07-13 21:09:06.384540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.384550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.384662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.384678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.678 [2024-07-13 21:09:06.384689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.384698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.384768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.384785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.678 [2024-07-13 21:09:06.384796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.384806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.384828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.384840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.678 [2024-07-13 21:09:06.384856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.384866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.475243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.475301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.678 [2024-07-13 21:09:06.475333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.475344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.510653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.510704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.678 [2024-07-13 21:09:06.510737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.510748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.510833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.510888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.678 [2024-07-13 21:09:06.510920] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.510931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.510999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.511013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.678 [2024-07-13 21:09:06.511024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.511041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.511160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.511179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.678 [2024-07-13 21:09:06.511191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.511202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.511256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.511273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:52.678 [2024-07-13 21:09:06.511284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.511302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.511348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.511369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.678 [2024-07-13 21:09:06.511381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.511392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.511447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.678 [2024-07-13 21:09:06.511464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.678 [2024-07-13 21:09:06.511476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.678 [2024-07-13 21:09:06.511492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.678 [2024-07-13 21:09:06.511660] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 357.961 ms, result 0 00:17:53.614 00:17:53.614 00:17:53.614 21:09:07 -- ftl/trim.sh@72 -- # svcpid=72902 00:17:53.614 21:09:07 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:53.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.614 21:09:07 -- ftl/trim.sh@73 -- # waitforlisten 72902 00:17:53.614 21:09:07 -- common/autotest_common.sh@819 -- # '[' -z 72902 ']' 00:17:53.614 21:09:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.614 21:09:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:53.614 21:09:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:53.614 21:09:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:53.614 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:17:53.873 [2024-07-13 21:09:07.627309] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:53.874 [2024-07-13 21:09:07.627773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72902 ] 00:17:53.874 [2024-07-13 21:09:07.795885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.133 [2024-07-13 21:09:07.965007] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:54.133 [2024-07-13 21:09:07.965397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.507 21:09:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:55.507 21:09:09 -- common/autotest_common.sh@852 -- # return 0 00:17:55.507 21:09:09 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:55.765 [2024-07-13 21:09:09.565657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:55.765 [2024-07-13 21:09:09.565736] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:56.025 [2024-07-13 21:09:09.730325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.730377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:56.025 [2024-07-13 21:09:09.730415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:56.025 [2024-07-13 21:09:09.730427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.733612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.733651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.025 [2024-07-13 21:09:09.733689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:17:56.025 [2024-07-13 21:09:09.733701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.733881] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:56.025 [2024-07-13 21:09:09.734971] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:56.025 [2024-07-13 21:09:09.735012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.735038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.025 [2024-07-13 21:09:09.735052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:17:56.025 [2024-07-13 21:09:09.735063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.736365] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:56.025 [2024-07-13 21:09:09.751025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.751071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:56.025 [2024-07-13 21:09:09.751103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.667 ms 00:17:56.025 [2024-07-13 21:09:09.751116] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.751218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.751240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:56.025 [2024-07-13 21:09:09.751252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:56.025 [2024-07-13 21:09:09.751264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.755595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.755636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.025 [2024-07-13 21:09:09.755666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.275 ms 00:17:56.025 [2024-07-13 21:09:09.755682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.755828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.755891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.025 [2024-07-13 21:09:09.755907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:17:56.025 [2024-07-13 21:09:09.755923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.755978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.756013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:56.025 [2024-07-13 21:09:09.756027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:56.025 [2024-07-13 21:09:09.756046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.756112] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:56.025 [2024-07-13 21:09:09.760079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.760152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.025 [2024-07-13 21:09:09.760170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.002 ms 00:17:56.025 [2024-07-13 21:09:09.760182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.760250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.760267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:56.025 [2024-07-13 21:09:09.760281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:56.025 [2024-07-13 21:09:09.760293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.760323] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:56.025 [2024-07-13 21:09:09.760366] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:56.025 [2024-07-13 21:09:09.760423] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:56.025 [2024-07-13 21:09:09.760442] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:56.025 [2024-07-13 21:09:09.760537] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:56.025 [2024-07-13 21:09:09.760553] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:56.025 [2024-07-13 21:09:09.760568] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:56.025 [2024-07-13 21:09:09.760582] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:56.025 [2024-07-13 21:09:09.760600] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:56.025 [2024-07-13 21:09:09.760612] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:56.025 [2024-07-13 21:09:09.760623] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:56.025 [2024-07-13 21:09:09.760633] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:56.025 [2024-07-13 21:09:09.760647] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:56.025 [2024-07-13 21:09:09.760658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.760671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:56.025 [2024-07-13 21:09:09.760681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:17:56.025 [2024-07-13 21:09:09.760694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.760764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.025 [2024-07-13 21:09:09.760779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:56.025 [2024-07-13 21:09:09.760792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:56.025 [2024-07-13 21:09:09.760804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.025 [2024-07-13 21:09:09.760919] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:56.025 [2024-07-13 21:09:09.760940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:56.025 [2024-07-13 21:09:09.760952] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.025 [2024-07-13 21:09:09.760966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.025 [2024-07-13 21:09:09.760977] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:56.025 [2024-07-13 21:09:09.760991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:56.025 [2024-07-13 21:09:09.761002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:56.025 [2024-07-13 21:09:09.761017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:56.025 [2024-07-13 21:09:09.761028] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:56.025 [2024-07-13 21:09:09.761040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.025 [2024-07-13 21:09:09.761050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:56.025 [2024-07-13 21:09:09.761062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:56.025 [2024-07-13 21:09:09.761073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.025 
[2024-07-13 21:09:09.761085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:56.025 [2024-07-13 21:09:09.761095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:56.025 [2024-07-13 21:09:09.761109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.025 [2024-07-13 21:09:09.761119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:56.025 [2024-07-13 21:09:09.761131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:56.025 [2024-07-13 21:09:09.761141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.026 [2024-07-13 21:09:09.761153] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:56.026 [2024-07-13 21:09:09.761164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:56.026 [2024-07-13 21:09:09.761176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:56.026 [2024-07-13 21:09:09.761186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:56.026 [2024-07-13 21:09:09.761201] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:56.026 [2024-07-13 21:09:09.761211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:56.026 [2024-07-13 21:09:09.761223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:56.026 [2024-07-13 21:09:09.761233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:56.026 [2024-07-13 21:09:09.761245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:56.026 [2024-07-13 21:09:09.761270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:56.026 [2024-07-13 21:09:09.761281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:56.026 [2024-07-13 21:09:09.761319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:56.026 [2024-07-13 21:09:09.761332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:56.026 [2024-07-13 21:09:09.761342] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:56.026 [2024-07-13 21:09:09.761354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:56.026 [2024-07-13 21:09:09.761364] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:56.026 [2024-07-13 21:09:09.761376] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:56.026 [2024-07-13 21:09:09.761386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.026 [2024-07-13 21:09:09.761397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:56.026 [2024-07-13 21:09:09.761408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:56.026 [2024-07-13 21:09:09.761422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.026 [2024-07-13 21:09:09.761431] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:56.026 [2024-07-13 21:09:09.761445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:56.026 [2024-07-13 21:09:09.761455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.026 [2024-07-13 21:09:09.761471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.026 [2024-07-13 21:09:09.761482] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region vmap 00:17:56.026 [2024-07-13 21:09:09.761494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:56.026 [2024-07-13 21:09:09.761505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:56.026 [2024-07-13 21:09:09.761518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:56.026 [2024-07-13 21:09:09.761528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:56.026 [2024-07-13 21:09:09.761540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:56.026 [2024-07-13 21:09:09.761551] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:56.026 [2024-07-13 21:09:09.761567] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.026 [2024-07-13 21:09:09.761579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:56.026 [2024-07-13 21:09:09.761592] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:56.026 [2024-07-13 21:09:09.761603] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:56.026 [2024-07-13 21:09:09.761619] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:56.026 [2024-07-13 21:09:09.761631] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:56.026 [2024-07-13 21:09:09.761643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:56.026 [2024-07-13 21:09:09.761654] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:56.026 [2024-07-13 21:09:09.761667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:56.026 [2024-07-13 21:09:09.761678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:56.026 [2024-07-13 21:09:09.761691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:56.026 [2024-07-13 21:09:09.761702] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:56.026 [2024-07-13 21:09:09.761715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:56.026 [2024-07-13 21:09:09.761726] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:56.026 [2024-07-13 21:09:09.761738] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:56.026 [2024-07-13 21:09:09.761751] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.026 [2024-07-13 21:09:09.761765] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:56.026 [2024-07-13 21:09:09.761776] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:56.026 [2024-07-13 21:09:09.761789] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:56.026 [2024-07-13 21:09:09.761800] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:56.026 [2024-07-13 21:09:09.761816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.761828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:56.026 [2024-07-13 21:09:09.761842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:17:56.026 [2024-07-13 21:09:09.761852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.780706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.780908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.026 [2024-07-13 21:09:09.781057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.757 ms 00:17:56.026 [2024-07-13 21:09:09.781183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.781401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.781467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:56.026 [2024-07-13 21:09:09.781613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:56.026 [2024-07-13 21:09:09.781731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.818260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.818491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.026 [2024-07-13 21:09:09.818644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.369 ms 00:17:56.026 [2024-07-13 21:09:09.818753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.818924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.818979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.026 [2024-07-13 21:09:09.819079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:56.026 [2024-07-13 21:09:09.819129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.819485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.819656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.026 [2024-07-13 21:09:09.819781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:56.026 [2024-07-13 21:09:09.819908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.820125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.820182] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.026 [2024-07-13 21:09:09.820383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:56.026 [2024-07-13 21:09:09.820450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.837003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.837197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.026 [2024-07-13 21:09:09.837355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.454 ms 00:17:56.026 [2024-07-13 21:09:09.837466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.852212] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:56.026 [2024-07-13 21:09:09.852419] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:56.026 [2024-07-13 21:09:09.852676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.852782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:56.026 [2024-07-13 21:09:09.852810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.018 ms 00:17:56.026 [2024-07-13 21:09:09.852823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.878800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.878863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:56.026 [2024-07-13 21:09:09.878903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.851 ms 00:17:56.026 [2024-07-13 21:09:09.878915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.892730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.892765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:56.026 [2024-07-13 21:09:09.892820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.712 ms 00:17:56.026 [2024-07-13 21:09:09.892832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.906417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.906452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:56.026 [2024-07-13 21:09:09.906493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.468 ms 00:17:56.026 [2024-07-13 21:09:09.906504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.026 [2024-07-13 21:09:09.906994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.026 [2024-07-13 21:09:09.907024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:56.026 [2024-07-13 21:09:09.907044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:17:56.026 [2024-07-13 21:09:09.907056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.285 [2024-07-13 21:09:09.978908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.285 [2024-07-13 21:09:09.979018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:17:56.285 [2024-07-13 21:09:09.979058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.796 ms 00:17:56.285 [2024-07-13 21:09:09.979076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.285 [2024-07-13 21:09:09.990932] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:56.285 [2024-07-13 21:09:10.003854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.285 [2024-07-13 21:09:10.003944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:56.285 [2024-07-13 21:09:10.003968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.634 ms 00:17:56.285 [2024-07-13 21:09:10.003987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.285 [2024-07-13 21:09:10.004139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.285 [2024-07-13 21:09:10.004172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:56.285 [2024-07-13 21:09:10.004188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:56.285 [2024-07-13 21:09:10.004216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.285 [2024-07-13 21:09:10.004286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.285 [2024-07-13 21:09:10.004309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:56.285 [2024-07-13 21:09:10.004323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:56.285 [2024-07-13 21:09:10.004340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.285 [2024-07-13 21:09:10.006575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.285 [2024-07-13 21:09:10.006725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:56.285 [2024-07-13 21:09:10.006852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:17:56.285 [2024-07-13 21:09:10.006969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.285 [2024-07-13 21:09:10.007051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.285 [2024-07-13 21:09:10.007143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:56.285 [2024-07-13 21:09:10.007241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:56.285 [2024-07-13 21:09:10.007299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.285 [2024-07-13 21:09:10.007472] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:56.285 [2024-07-13 21:09:10.007619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.285 [2024-07-13 21:09:10.007748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:56.285 [2024-07-13 21:09:10.007880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:56.285 [2024-07-13 21:09:10.007991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.285 [2024-07-13 21:09:10.038528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.285 [2024-07-13 21:09:10.038572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:56.285 [2024-07-13 21:09:10.038606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.455 ms 
00:17:56.285 [2024-07-13 21:09:10.038617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.285 [2024-07-13 21:09:10.038739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.285 [2024-07-13 21:09:10.038758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:56.285 [2024-07-13 21:09:10.038772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:56.285 [2024-07-13 21:09:10.038782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.285 [2024-07-13 21:09:10.039898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.285 [2024-07-13 21:09:10.043637] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.200 ms, result 0 00:17:56.285 [2024-07-13 21:09:10.045246] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:56.285 Some configs were skipped because the RPC state that can call them passed over. 00:17:56.286 21:09:10 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:56.544 [2024-07-13 21:09:10.315114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.544 [2024-07-13 21:09:10.315171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:56.544 [2024-07-13 21:09:10.315191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.001 ms 00:17:56.544 [2024-07-13 21:09:10.315204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.544 [2024-07-13 21:09:10.315249] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 30.142 ms, result 0 00:17:56.544 true 00:17:56.544 21:09:10 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:56.803 [2024-07-13 21:09:10.606979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.803 [2024-07-13 21:09:10.607213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:56.803 [2024-07-13 21:09:10.607343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.326 ms 00:17:56.803 [2024-07-13 21:09:10.607392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.803 [2024-07-13 21:09:10.607581] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 27.923 ms, result 0 00:17:56.803 true 00:17:56.803 21:09:10 -- ftl/trim.sh@81 -- # killprocess 72902 00:17:56.803 21:09:10 -- common/autotest_common.sh@926 -- # '[' -z 72902 ']' 00:17:56.803 21:09:10 -- common/autotest_common.sh@930 -- # kill -0 72902 00:17:56.803 21:09:10 -- common/autotest_common.sh@931 -- # uname 00:17:56.803 21:09:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:56.803 21:09:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72902 00:17:56.803 21:09:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:56.803 killing process with pid 72902 00:17:56.803 21:09:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:56.803 21:09:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72902' 00:17:56.803 21:09:10 -- common/autotest_common.sh@945 -- # kill 72902 00:17:56.803 21:09:10 -- 
common/autotest_common.sh@950 -- # wait 72902 00:17:57.740 [2024-07-13 21:09:11.475291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.475347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.740 [2024-07-13 21:09:11.475366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:57.740 [2024-07-13 21:09:11.475379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.475407] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:57.740 [2024-07-13 21:09:11.478366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.478388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.740 [2024-07-13 21:09:11.478407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.936 ms 00:17:57.740 [2024-07-13 21:09:11.478417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.478696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.478710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.740 [2024-07-13 21:09:11.478723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:17:57.740 [2024-07-13 21:09:11.478733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.482890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.483059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:57.740 [2024-07-13 21:09:11.483130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.113 ms 00:17:57.740 [2024-07-13 21:09:11.483177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.490183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.490361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:57.740 [2024-07-13 21:09:11.490478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.923 ms 00:17:57.740 [2024-07-13 21:09:11.490532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.502603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.502788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.740 [2024-07-13 21:09:11.502961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.967 ms 00:17:57.740 [2024-07-13 21:09:11.503090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.511389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.511588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.740 [2024-07-13 21:09:11.511727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.204 ms 00:17:57.740 [2024-07-13 21:09:11.511778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.512064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.512244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:17:57.740 [2024-07-13 21:09:11.512367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:57.740 [2024-07-13 21:09:11.512416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.524838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.525046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:57.740 [2024-07-13 21:09:11.525176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.293 ms 00:17:57.740 [2024-07-13 21:09:11.525228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.537056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.537232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:57.740 [2024-07-13 21:09:11.537354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.671 ms 00:17:57.740 [2024-07-13 21:09:11.537410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.548630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.548806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.740 [2024-07-13 21:09:11.548968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.140 ms 00:17:57.740 [2024-07-13 21:09:11.549026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.560436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.740 [2024-07-13 21:09:11.560629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.740 [2024-07-13 21:09:11.560746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.224 ms 00:17:57.740 [2024-07-13 21:09:11.560796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.740 [2024-07-13 21:09:11.560899] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:57.740 [2024-07-13 21:09:11.561035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561790] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.561967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.562088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.562261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.562281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:57.740 [2024-07-13 21:09:11.562295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562547] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 
21:09:11.562867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.562994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:17:57.741 [2024-07-13 21:09:11.563173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:57.741 [2024-07-13 21:09:11.563386] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:57.741 [2024-07-13 21:09:11.563413] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f37e351e-d87e-4dea-9055-a9b6d8866d11 00:17:57.741 [2024-07-13 21:09:11.563425] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:57.741 [2024-07-13 21:09:11.563440] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:57.741 [2024-07-13 21:09:11.563450] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:57.741 [2024-07-13 21:09:11.563462] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:57.741 [2024-07-13 21:09:11.563472] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:57.741 [2024-07-13 21:09:11.563484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:57.741 [2024-07-13 21:09:11.563494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:57.741 [2024-07-13 21:09:11.563506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:57.741 [2024-07-13 21:09:11.563515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
00:17:57.741 [2024-07-13 21:09:11.563527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.741 [2024-07-13 21:09:11.563538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.741 [2024-07-13 21:09:11.563552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:17:57.741 [2024-07-13 21:09:11.563562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.741 [2024-07-13 21:09:11.578858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.742 [2024-07-13 21:09:11.579049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.742 [2024-07-13 21:09:11.579197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.247 ms 00:17:57.742 [2024-07-13 21:09:11.579324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.742 [2024-07-13 21:09:11.581001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.742 [2024-07-13 21:09:11.581148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.742 [2024-07-13 21:09:11.581286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:17:57.742 [2024-07-13 21:09:11.581336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.742 [2024-07-13 21:09:11.633206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.742 [2024-07-13 21:09:11.633375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.742 [2024-07-13 21:09:11.633544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.742 [2024-07-13 21:09:11.633594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.742 [2024-07-13 21:09:11.633721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.742 [2024-07-13 21:09:11.633770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.742 [2024-07-13 21:09:11.633891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.742 [2024-07-13 21:09:11.633933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.742 [2024-07-13 21:09:11.634037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.742 [2024-07-13 21:09:11.634132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.742 [2024-07-13 21:09:11.634199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.742 [2024-07-13 21:09:11.634250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.742 [2024-07-13 21:09:11.634304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.742 [2024-07-13 21:09:11.634344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.742 [2024-07-13 21:09:11.634382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.742 [2024-07-13 21:09:11.634465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.001 [2024-07-13 21:09:11.723616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.001 [2024-07-13 21:09:11.723882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.001 [2024-07-13 21:09:11.724009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.001 [2024-07-13 21:09:11.724058] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.001 [2024-07-13 21:09:11.757490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.001 [2024-07-13 21:09:11.757654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.001 [2024-07-13 21:09:11.757816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.001 [2024-07-13 21:09:11.757913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.001 [2024-07-13 21:09:11.758108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.001 [2024-07-13 21:09:11.758159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.001 [2024-07-13 21:09:11.758202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.001 [2024-07-13 21:09:11.758238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.001 [2024-07-13 21:09:11.758315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.001 [2024-07-13 21:09:11.758403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.001 [2024-07-13 21:09:11.758478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.001 [2024-07-13 21:09:11.758513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.001 [2024-07-13 21:09:11.758662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.001 [2024-07-13 21:09:11.758743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.001 [2024-07-13 21:09:11.758805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.001 [2024-07-13 21:09:11.758874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.001 [2024-07-13 21:09:11.758967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.001 [2024-07-13 21:09:11.758988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:58.001 [2024-07-13 21:09:11.759003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.001 [2024-07-13 21:09:11.759014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.001 [2024-07-13 21:09:11.759062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.001 [2024-07-13 21:09:11.759079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.001 [2024-07-13 21:09:11.759094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.001 [2024-07-13 21:09:11.759106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.001 [2024-07-13 21:09:11.759162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.001 [2024-07-13 21:09:11.759177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.001 [2024-07-13 21:09:11.759191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.001 [2024-07-13 21:09:11.759201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.001 [2024-07-13 21:09:11.759368] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 284.053 ms, result 0 00:17:58.936 21:09:12 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:58.936 21:09:12 -- ftl/trim.sh@85 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:58.936 [2024-07-13 21:09:12.818408] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:58.936 [2024-07-13 21:09:12.818564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72972 ] 00:17:59.205 [2024-07-13 21:09:12.987163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.475 [2024-07-13 21:09:13.146672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.733 [2024-07-13 21:09:13.414385] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:59.733 [2024-07-13 21:09:13.414473] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:59.733 [2024-07-13 21:09:13.567357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.733 [2024-07-13 21:09:13.567410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:59.733 [2024-07-13 21:09:13.567446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:59.733 [2024-07-13 21:09:13.567460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.733 [2024-07-13 21:09:13.570723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.733 [2024-07-13 21:09:13.570762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:59.733 [2024-07-13 21:09:13.570809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:17:59.733 [2024-07-13 21:09:13.570824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.733 [2024-07-13 21:09:13.571043] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:59.733 [2024-07-13 21:09:13.572036] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:59.733 [2024-07-13 21:09:13.572077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.733 [2024-07-13 21:09:13.572137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.733 [2024-07-13 21:09:13.572151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:17:59.733 [2024-07-13 21:09:13.572162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.733 [2024-07-13 21:09:13.573360] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:59.733 [2024-07-13 21:09:13.588256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.733 [2024-07-13 21:09:13.588299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:59.733 [2024-07-13 21:09:13.588318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.897 ms 00:17:59.733 [2024-07-13 21:09:13.588329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.733 [2024-07-13 21:09:13.588457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.733 [2024-07-13 21:09:13.588478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:59.733 [2024-07-13 21:09:13.588508] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:59.734 [2024-07-13 21:09:13.588519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.734 [2024-07-13 21:09:13.592903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.734 [2024-07-13 21:09:13.592937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.734 [2024-07-13 21:09:13.592967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.322 ms 00:17:59.734 [2024-07-13 21:09:13.592977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.734 [2024-07-13 21:09:13.593096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.734 [2024-07-13 21:09:13.593118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.734 [2024-07-13 21:09:13.593130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:59.734 [2024-07-13 21:09:13.593139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.734 [2024-07-13 21:09:13.593174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.734 [2024-07-13 21:09:13.593188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:59.734 [2024-07-13 21:09:13.593200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:59.734 [2024-07-13 21:09:13.593209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.734 [2024-07-13 21:09:13.593240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:59.734 [2024-07-13 21:09:13.597292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.734 [2024-07-13 21:09:13.597327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.734 [2024-07-13 21:09:13.597357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.064 ms 00:17:59.734 [2024-07-13 21:09:13.597367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.734 [2024-07-13 21:09:13.597427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.734 [2024-07-13 21:09:13.597448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:59.734 [2024-07-13 21:09:13.597459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:59.734 [2024-07-13 21:09:13.597469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.734 [2024-07-13 21:09:13.597492] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:59.734 [2024-07-13 21:09:13.597516] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:59.734 [2024-07-13 21:09:13.597553] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:59.734 [2024-07-13 21:09:13.597571] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:59.734 [2024-07-13 21:09:13.597647] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:59.734 [2024-07-13 21:09:13.597661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:59.734 [2024-07-13 21:09:13.597674] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:59.734 [2024-07-13 21:09:13.597687] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:59.734 [2024-07-13 21:09:13.597699] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:59.734 [2024-07-13 21:09:13.597709] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:59.734 [2024-07-13 21:09:13.597719] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:59.734 [2024-07-13 21:09:13.597728] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:59.734 [2024-07-13 21:09:13.597737] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:59.734 [2024-07-13 21:09:13.597747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.734 [2024-07-13 21:09:13.597761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:59.734 [2024-07-13 21:09:13.597771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:17:59.734 [2024-07-13 21:09:13.597797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.734 [2024-07-13 21:09:13.597905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.734 [2024-07-13 21:09:13.597923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:59.734 [2024-07-13 21:09:13.597935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:59.734 [2024-07-13 21:09:13.597945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.734 [2024-07-13 21:09:13.598029] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:59.734 [2024-07-13 21:09:13.598044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:59.734 [2024-07-13 21:09:13.598055] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:59.734 [2024-07-13 21:09:13.598072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.734 [2024-07-13 21:09:13.598082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:59.734 [2024-07-13 21:09:13.598093] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:59.734 [2024-07-13 21:09:13.598103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:59.734 [2024-07-13 21:09:13.598113] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:59.734 [2024-07-13 21:09:13.598123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:59.734 [2024-07-13 21:09:13.598147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:59.734 [2024-07-13 21:09:13.598156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:59.734 [2024-07-13 21:09:13.598165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:59.734 [2024-07-13 21:09:13.598174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:59.734 [2024-07-13 21:09:13.598183] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:59.734 [2024-07-13 21:09:13.598193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:59.734 [2024-07-13 21:09:13.598203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.734 
[2024-07-13 21:09:13.598213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:59.734 [2024-07-13 21:09:13.598222] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:59.734 [2024-07-13 21:09:13.598231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.734 [2024-07-13 21:09:13.598266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:59.734 [2024-07-13 21:09:13.598276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:59.734 [2024-07-13 21:09:13.598285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:59.734 [2024-07-13 21:09:13.598294] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:59.734 [2024-07-13 21:09:13.598303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:59.734 [2024-07-13 21:09:13.598312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:59.734 [2024-07-13 21:09:13.598321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:59.734 [2024-07-13 21:09:13.598330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:59.734 [2024-07-13 21:09:13.598339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:59.734 [2024-07-13 21:09:13.598347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:59.734 [2024-07-13 21:09:13.598356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:59.734 [2024-07-13 21:09:13.598364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:59.734 [2024-07-13 21:09:13.598373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:59.734 [2024-07-13 21:09:13.598382] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:59.734 [2024-07-13 21:09:13.598391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:59.734 [2024-07-13 21:09:13.598399] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:59.734 [2024-07-13 21:09:13.598408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:59.734 [2024-07-13 21:09:13.598417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:59.734 [2024-07-13 21:09:13.598425] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:59.734 [2024-07-13 21:09:13.598434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:59.734 [2024-07-13 21:09:13.598443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:59.734 [2024-07-13 21:09:13.598451] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:59.734 [2024-07-13 21:09:13.598461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:59.734 [2024-07-13 21:09:13.598470] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:59.734 [2024-07-13 21:09:13.598481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.734 [2024-07-13 21:09:13.598491] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:59.734 [2024-07-13 21:09:13.598500] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:59.734 [2024-07-13 21:09:13.598509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:59.734 [2024-07-13 21:09:13.598519] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:17:59.734 [2024-07-13 21:09:13.598527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:59.734 [2024-07-13 21:09:13.598537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:59.734 [2024-07-13 21:09:13.598563] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:59.734 [2024-07-13 21:09:13.598580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:59.734 [2024-07-13 21:09:13.598591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:59.734 [2024-07-13 21:09:13.598601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:59.734 [2024-07-13 21:09:13.598612] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:59.734 [2024-07-13 21:09:13.598622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:59.734 [2024-07-13 21:09:13.598631] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:59.734 [2024-07-13 21:09:13.598641] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:59.734 [2024-07-13 21:09:13.598651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:59.734 [2024-07-13 21:09:13.598661] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:59.734 [2024-07-13 21:09:13.598671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:59.734 [2024-07-13 21:09:13.598681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:59.734 [2024-07-13 21:09:13.598691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:59.734 [2024-07-13 21:09:13.598701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:59.735 [2024-07-13 21:09:13.598712] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:59.735 [2024-07-13 21:09:13.598721] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:59.735 [2024-07-13 21:09:13.598732] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:59.735 [2024-07-13 21:09:13.598743] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:59.735 [2024-07-13 21:09:13.598753] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:59.735 [2024-07-13 
21:09:13.598764] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:59.735 [2024-07-13 21:09:13.598775] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:59.735 [2024-07-13 21:09:13.598786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.735 [2024-07-13 21:09:13.598801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:59.735 [2024-07-13 21:09:13.598812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:17:59.735 [2024-07-13 21:09:13.598822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.735 [2024-07-13 21:09:13.615658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.735 [2024-07-13 21:09:13.615872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.735 [2024-07-13 21:09:13.615985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.395 ms 00:17:59.735 [2024-07-13 21:09:13.616033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.735 [2024-07-13 21:09:13.616309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.735 [2024-07-13 21:09:13.616467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:59.735 [2024-07-13 21:09:13.616581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:59.735 [2024-07-13 21:09:13.616693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.666249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.666448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.993 [2024-07-13 21:09:13.666589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.482 ms 00:17:59.993 [2024-07-13 21:09:13.666658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.666784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.666837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.993 [2024-07-13 21:09:13.666933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:59.993 [2024-07-13 21:09:13.667052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.667566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.667710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.993 [2024-07-13 21:09:13.667810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:17:59.993 [2024-07-13 21:09:13.667949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.668143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.668164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.993 [2024-07-13 21:09:13.668177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:59.993 [2024-07-13 21:09:13.668189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.683648] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.683684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.993 [2024-07-13 21:09:13.683716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.428 ms 00:17:59.993 [2024-07-13 21:09:13.683726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.698172] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:59.993 [2024-07-13 21:09:13.698229] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:59.993 [2024-07-13 21:09:13.698277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.698288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:59.993 [2024-07-13 21:09:13.698299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.412 ms 00:17:59.993 [2024-07-13 21:09:13.698309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.723884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.723920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:59.993 [2024-07-13 21:09:13.723951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.493 ms 00:17:59.993 [2024-07-13 21:09:13.723968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.737700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.737737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:59.993 [2024-07-13 21:09:13.737767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.651 ms 00:17:59.993 [2024-07-13 21:09:13.737776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.751435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.751480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:59.993 [2024-07-13 21:09:13.751510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.530 ms 00:17:59.993 [2024-07-13 21:09:13.751519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.752000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.752029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:59.993 [2024-07-13 21:09:13.752043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:17:59.993 [2024-07-13 21:09:13.752053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.821029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.821081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:59.993 [2024-07-13 21:09:13.821115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.943 ms 00:17:59.993 [2024-07-13 21:09:13.821126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.832972] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 59 (of 60) MiB 00:17:59.993 [2024-07-13 21:09:13.845877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.845938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:59.993 [2024-07-13 21:09:13.845988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.610 ms 00:17:59.993 [2024-07-13 21:09:13.846015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.846139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.846157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:59.993 [2024-07-13 21:09:13.846170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:59.993 [2024-07-13 21:09:13.846180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.846244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.846281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:59.993 [2024-07-13 21:09:13.846293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:59.993 [2024-07-13 21:09:13.846303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.848641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.848787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:59.993 [2024-07-13 21:09:13.848923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.306 ms 00:17:59.993 [2024-07-13 21:09:13.848975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.849115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.849171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:59.993 [2024-07-13 21:09:13.849211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:59.993 [2024-07-13 21:09:13.849313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.849408] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:59.993 [2024-07-13 21:09:13.849461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.849567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:59.993 [2024-07-13 21:09:13.849612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:59.993 [2024-07-13 21:09:13.849711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.881454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.881647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:59.993 [2024-07-13 21:09:13.881799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.664 ms 00:17:59.993 [2024-07-13 21:09:13.881824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.881982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.993 [2024-07-13 21:09:13.882003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:17:59.993 [2024-07-13 21:09:13.882016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:59.993 [2024-07-13 21:09:13.882028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.993 [2024-07-13 21:09:13.882999] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:59.993 [2024-07-13 21:09:13.887129] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 315.266 ms, result 0 00:17:59.993 [2024-07-13 21:09:13.888048] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:59.993 [2024-07-13 21:09:13.904452] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:11.759  Copying: 25/256 [MB] (25 MBps) Copying: 49/256 [MB] (23 MBps) Copying: 72/256 [MB] (22 MBps) Copying: 94/256 [MB] (22 MBps) Copying: 116/256 [MB] (21 MBps) Copying: 137/256 [MB] (20 MBps) Copying: 158/256 [MB] (21 MBps) Copying: 178/256 [MB] (20 MBps) Copying: 200/256 [MB] (21 MBps) Copying: 221/256 [MB] (21 MBps) Copying: 242/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-13 21:09:25.505044] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:11.759 [2024-07-13 21:09:25.518377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.518435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:11.759 [2024-07-13 21:09:25.518470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:11.759 [2024-07-13 21:09:25.518505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.759 [2024-07-13 21:09:25.518555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:11.759 [2024-07-13 21:09:25.522057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.522089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:11.759 [2024-07-13 21:09:25.522104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:18:11.759 [2024-07-13 21:09:25.522115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.759 [2024-07-13 21:09:25.522482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.522515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:11.759 [2024-07-13 21:09:25.522530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:18:11.759 [2024-07-13 21:09:25.522542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.759 [2024-07-13 21:09:25.526523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.526564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:11.759 [2024-07-13 21:09:25.526594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.958 ms 00:18:11.759 [2024-07-13 21:09:25.526604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.759 [2024-07-13 21:09:25.534153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.534186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P 
unmaps 00:18:11.759 [2024-07-13 21:09:25.534217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.513 ms 00:18:11.759 [2024-07-13 21:09:25.534227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.759 [2024-07-13 21:09:25.564305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.564355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:11.759 [2024-07-13 21:09:25.564388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.984 ms 00:18:11.759 [2024-07-13 21:09:25.564400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.759 [2024-07-13 21:09:25.580640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.580676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:11.759 [2024-07-13 21:09:25.580720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.122 ms 00:18:11.759 [2024-07-13 21:09:25.580730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.759 [2024-07-13 21:09:25.580982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.581003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:11.759 [2024-07-13 21:09:25.581016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:18:11.759 [2024-07-13 21:09:25.581026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.759 [2024-07-13 21:09:25.609332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.609369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:11.759 [2024-07-13 21:09:25.609418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.284 ms 00:18:11.759 [2024-07-13 21:09:25.609428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.759 [2024-07-13 21:09:25.637592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.637627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:11.759 [2024-07-13 21:09:25.637657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.108 ms 00:18:11.759 [2024-07-13 21:09:25.637666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.759 [2024-07-13 21:09:25.665040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.759 [2024-07-13 21:09:25.665075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:11.759 [2024-07-13 21:09:25.665106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.320 ms 00:18:11.759 [2024-07-13 21:09:25.665116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.019 [2024-07-13 21:09:25.694307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.019 [2024-07-13 21:09:25.694345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:12.019 [2024-07-13 21:09:25.694360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.101 ms 00:18:12.019 [2024-07-13 21:09:25.694370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.019 [2024-07-13 21:09:25.694428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:12.019 [2024-07-13 
21:09:25.694449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 
[2024-07-13 21:09:25.694744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.694994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.695007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.695024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.695034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:18:12.019 [2024-07-13 21:09:25.695044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.695055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.695065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.695075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.695099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.695109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:12.019 [2024-07-13 21:09:25.695118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:12.020 [2024-07-13 21:09:25.695587] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:12.020 [2024-07-13 21:09:25.695634] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f37e351e-d87e-4dea-9055-a9b6d8866d11 00:18:12.020 [2024-07-13 21:09:25.695645] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:12.020 [2024-07-13 21:09:25.695654] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:12.020 [2024-07-13 21:09:25.695664] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:12.020 [2024-07-13 21:09:25.695674] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:12.020 [2024-07-13 21:09:25.695684] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:12.020 [2024-07-13 21:09:25.695694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:12.020 [2024-07-13 21:09:25.695704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:12.020 [2024-07-13 21:09:25.695713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:12.020 [2024-07-13 21:09:25.695722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:12.020 [2024-07-13 21:09:25.695734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.020 [2024-07-13 21:09:25.695764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:12.020 [2024-07-13 21:09:25.695776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:18:12.020 [2024-07-13 21:09:25.695785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.020 [2024-07-13 21:09:25.711598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.020 [2024-07-13 21:09:25.711632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:12.020 [2024-07-13 21:09:25.711663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.786 ms 00:18:12.020 [2024-07-13 21:09:25.711673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.020 [2024-07-13 21:09:25.711971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.020 [2024-07-13 21:09:25.711990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:12.020 [2024-07-13 21:09:25.712002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:18:12.020 [2024-07-13 21:09:25.712011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.020 [2024-07-13 21:09:25.757198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.020 [2024-07-13 21:09:25.757270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.020 [2024-07-13 21:09:25.757318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.020 [2024-07-13 21:09:25.757328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.020 [2024-07-13 21:09:25.757429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.020 [2024-07-13 21:09:25.757444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.020 [2024-07-13 21:09:25.757455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.020 [2024-07-13 21:09:25.757464] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:12.020 [2024-07-13 21:09:25.757520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.020 [2024-07-13 21:09:25.757536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.020 [2024-07-13 21:09:25.757547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.020 [2024-07-13 21:09:25.757557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.020 [2024-07-13 21:09:25.757579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.020 [2024-07-13 21:09:25.757628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.020 [2024-07-13 21:09:25.757638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.020 [2024-07-13 21:09:25.757648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.020 [2024-07-13 21:09:25.847113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.020 [2024-07-13 21:09:25.847166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.020 [2024-07-13 21:09:25.847199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.020 [2024-07-13 21:09:25.847209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.020 [2024-07-13 21:09:25.884559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.020 [2024-07-13 21:09:25.884629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.020 [2024-07-13 21:09:25.884661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.020 [2024-07-13 21:09:25.884672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.020 [2024-07-13 21:09:25.884754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.020 [2024-07-13 21:09:25.884770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.020 [2024-07-13 21:09:25.884781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.020 [2024-07-13 21:09:25.884790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.020 [2024-07-13 21:09:25.884821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.020 [2024-07-13 21:09:25.884832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.021 [2024-07-13 21:09:25.884915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.021 [2024-07-13 21:09:25.884926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.021 [2024-07-13 21:09:25.885074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.021 [2024-07-13 21:09:25.885092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.021 [2024-07-13 21:09:25.885103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.021 [2024-07-13 21:09:25.885112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.021 [2024-07-13 21:09:25.885160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.021 [2024-07-13 21:09:25.885176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:12.021 [2024-07-13 21:09:25.885187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:12.021 [2024-07-13 21:09:25.885210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.021 [2024-07-13 21:09:25.885302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.021 [2024-07-13 21:09:25.885316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.021 [2024-07-13 21:09:25.885328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.021 [2024-07-13 21:09:25.885339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.021 [2024-07-13 21:09:25.885392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.021 [2024-07-13 21:09:25.885408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.021 [2024-07-13 21:09:25.885433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.021 [2024-07-13 21:09:25.885451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.021 [2024-07-13 21:09:25.885641] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.284 ms, result 0 00:18:12.957 00:18:12.957 00:18:12.957 21:09:26 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:12.957 21:09:26 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:13.525 21:09:27 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:13.784 [2024-07-13 21:09:27.492509] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:13.784 [2024-07-13 21:09:27.492681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73124 ] 00:18:13.784 [2024-07-13 21:09:27.662379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.043 [2024-07-13 21:09:27.832651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.312 [2024-07-13 21:09:28.110862] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:14.312 [2024-07-13 21:09:28.110972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:14.618 [2024-07-13 21:09:28.264059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.264131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:14.618 [2024-07-13 21:09:28.264170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:14.618 [2024-07-13 21:09:28.264188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.267196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.267233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:14.618 [2024-07-13 21:09:28.267265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.978 ms 00:18:14.618 [2024-07-13 21:09:28.267280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.267401] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as 
write buffer cache 00:18:14.618 [2024-07-13 21:09:28.268373] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:14.618 [2024-07-13 21:09:28.268429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.268448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:14.618 [2024-07-13 21:09:28.268460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:18:14.618 [2024-07-13 21:09:28.268471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.269711] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:14.618 [2024-07-13 21:09:28.284246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.284287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:14.618 [2024-07-13 21:09:28.284321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.537 ms 00:18:14.618 [2024-07-13 21:09:28.284332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.284469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.284488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:14.618 [2024-07-13 21:09:28.284503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:14.618 [2024-07-13 21:09:28.284513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.289050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.289085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:14.618 [2024-07-13 21:09:28.289115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.472 ms 00:18:14.618 [2024-07-13 21:09:28.289125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.289241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.289262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:14.618 [2024-07-13 21:09:28.289274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:14.618 [2024-07-13 21:09:28.289283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.289317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.289330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:14.618 [2024-07-13 21:09:28.289341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:14.618 [2024-07-13 21:09:28.289351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.289380] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:14.618 [2024-07-13 21:09:28.293392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.293427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:14.618 [2024-07-13 21:09:28.293456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.022 ms 00:18:14.618 [2024-07-13 21:09:28.293466] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.293526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.293547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:14.618 [2024-07-13 21:09:28.293558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:14.618 [2024-07-13 21:09:28.293567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.293606] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:14.618 [2024-07-13 21:09:28.293633] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:14.618 [2024-07-13 21:09:28.293670] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:14.618 [2024-07-13 21:09:28.293698] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:14.618 [2024-07-13 21:09:28.293773] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:14.618 [2024-07-13 21:09:28.293788] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:14.618 [2024-07-13 21:09:28.293801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:14.618 [2024-07-13 21:09:28.293815] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:14.618 [2024-07-13 21:09:28.293826] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:14.618 [2024-07-13 21:09:28.293837] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:14.618 [2024-07-13 21:09:28.293846] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:14.618 [2024-07-13 21:09:28.293856] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:14.618 [2024-07-13 21:09:28.293901] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:14.618 [2024-07-13 21:09:28.293916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.293931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:14.618 [2024-07-13 21:09:28.293942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:14.618 [2024-07-13 21:09:28.293953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.294041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.618 [2024-07-13 21:09:28.294055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:14.618 [2024-07-13 21:09:28.294066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:14.618 [2024-07-13 21:09:28.294077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.618 [2024-07-13 21:09:28.294151] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:14.618 [2024-07-13 21:09:28.294166] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:14.618 [2024-07-13 21:09:28.294177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:14.618 
[2024-07-13 21:09:28.294191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.618 [2024-07-13 21:09:28.294202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:14.618 [2024-07-13 21:09:28.294212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:14.618 [2024-07-13 21:09:28.294221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:14.618 [2024-07-13 21:09:28.294232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:14.618 [2024-07-13 21:09:28.294256] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:14.618 [2024-07-13 21:09:28.294265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:14.618 [2024-07-13 21:09:28.294274] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:14.619 [2024-07-13 21:09:28.294283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:14.619 [2024-07-13 21:09:28.294292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:14.619 [2024-07-13 21:09:28.294304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:14.619 [2024-07-13 21:09:28.294313] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:14.619 [2024-07-13 21:09:28.294322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.619 [2024-07-13 21:09:28.294331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:14.619 [2024-07-13 21:09:28.294340] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:14.619 [2024-07-13 21:09:28.294349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.619 [2024-07-13 21:09:28.294368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:14.619 [2024-07-13 21:09:28.294378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:14.619 [2024-07-13 21:09:28.294387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:14.619 [2024-07-13 21:09:28.294397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:14.619 [2024-07-13 21:09:28.294406] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:14.619 [2024-07-13 21:09:28.294415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:14.619 [2024-07-13 21:09:28.294424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:14.619 [2024-07-13 21:09:28.294433] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:14.619 [2024-07-13 21:09:28.294442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:14.619 [2024-07-13 21:09:28.294450] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:14.619 [2024-07-13 21:09:28.294459] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:14.619 [2024-07-13 21:09:28.294468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:14.619 [2024-07-13 21:09:28.294477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:14.619 [2024-07-13 21:09:28.294486] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:14.619 [2024-07-13 21:09:28.294495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:14.619 [2024-07-13 21:09:28.294504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:18:14.619 [2024-07-13 21:09:28.294513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:14.619 [2024-07-13 21:09:28.294522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:14.619 [2024-07-13 21:09:28.294531] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:14.619 [2024-07-13 21:09:28.294540] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:14.619 [2024-07-13 21:09:28.294548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:14.619 [2024-07-13 21:09:28.294557] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:14.619 [2024-07-13 21:09:28.294567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:14.619 [2024-07-13 21:09:28.294576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:14.619 [2024-07-13 21:09:28.294603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.619 [2024-07-13 21:09:28.294613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:14.619 [2024-07-13 21:09:28.294623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:14.619 [2024-07-13 21:09:28.294633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:14.619 [2024-07-13 21:09:28.294642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:14.619 [2024-07-13 21:09:28.294651] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:14.619 [2024-07-13 21:09:28.294661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:14.619 [2024-07-13 21:09:28.294671] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:14.619 [2024-07-13 21:09:28.294689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:14.619 [2024-07-13 21:09:28.294701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:14.619 [2024-07-13 21:09:28.294711] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:14.619 [2024-07-13 21:09:28.294721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:14.619 [2024-07-13 21:09:28.294731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:14.619 [2024-07-13 21:09:28.294741] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:14.619 [2024-07-13 21:09:28.294750] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:14.619 [2024-07-13 21:09:28.294760] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:14.619 [2024-07-13 21:09:28.294770] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:14.619 [2024-07-13 21:09:28.294780] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf 
ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:14.619 [2024-07-13 21:09:28.294790] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:14.619 [2024-07-13 21:09:28.294800] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:14.619 [2024-07-13 21:09:28.294811] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:14.619 [2024-07-13 21:09:28.294821] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:14.619 [2024-07-13 21:09:28.294831] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:14.619 [2024-07-13 21:09:28.294842] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:14.619 [2024-07-13 21:09:28.294869] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:14.619 [2024-07-13 21:09:28.294879] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:14.619 [2024-07-13 21:09:28.294889] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:14.619 [2024-07-13 21:09:28.294900] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:14.619 [2024-07-13 21:09:28.294931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.294979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:14.619 [2024-07-13 21:09:28.294990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:18:14.619 [2024-07-13 21:09:28.295000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.619 [2024-07-13 21:09:28.311396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.311438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:14.619 [2024-07-13 21:09:28.311470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.313 ms 00:18:14.619 [2024-07-13 21:09:28.311481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.619 [2024-07-13 21:09:28.311625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.311643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:14.619 [2024-07-13 21:09:28.311654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:14.619 [2024-07-13 21:09:28.311664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.619 [2024-07-13 21:09:28.358878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.358931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:14.619 [2024-07-13 21:09:28.358984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.169 ms 00:18:14.619 [2024-07-13 21:09:28.358995] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:14.619 [2024-07-13 21:09:28.359091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.359120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:14.619 [2024-07-13 21:09:28.359132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:14.619 [2024-07-13 21:09:28.359142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.619 [2024-07-13 21:09:28.359509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.359533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:14.619 [2024-07-13 21:09:28.359546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:18:14.619 [2024-07-13 21:09:28.359558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.619 [2024-07-13 21:09:28.359714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.359732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:14.619 [2024-07-13 21:09:28.359744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:18:14.619 [2024-07-13 21:09:28.359754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.619 [2024-07-13 21:09:28.375340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.375376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:14.619 [2024-07-13 21:09:28.375408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.558 ms 00:18:14.619 [2024-07-13 21:09:28.375418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.619 [2024-07-13 21:09:28.390180] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:14.619 [2024-07-13 21:09:28.390219] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:14.619 [2024-07-13 21:09:28.390252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.390262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:14.619 [2024-07-13 21:09:28.390274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.708 ms 00:18:14.619 [2024-07-13 21:09:28.390284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.619 [2024-07-13 21:09:28.416642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.416680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:14.619 [2024-07-13 21:09:28.416712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.277 ms 00:18:14.619 [2024-07-13 21:09:28.416729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.619 [2024-07-13 21:09:28.430724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.619 [2024-07-13 21:09:28.430761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:14.619 [2024-07-13 21:09:28.430792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.929 ms 00:18:14.620 [2024-07-13 21:09:28.430802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.620 [2024-07-13 
21:09:28.444747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.620 [2024-07-13 21:09:28.444793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:14.620 [2024-07-13 21:09:28.444824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.832 ms 00:18:14.620 [2024-07-13 21:09:28.444834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.620 [2024-07-13 21:09:28.445364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.620 [2024-07-13 21:09:28.445394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:14.620 [2024-07-13 21:09:28.445409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:18:14.620 [2024-07-13 21:09:28.445420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.620 [2024-07-13 21:09:28.517409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.620 [2024-07-13 21:09:28.517462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:14.620 [2024-07-13 21:09:28.517496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.957 ms 00:18:14.620 [2024-07-13 21:09:28.517506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.620 [2024-07-13 21:09:28.529343] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:14.890 [2024-07-13 21:09:28.545297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.545359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:14.890 [2024-07-13 21:09:28.545394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.627 ms 00:18:14.890 [2024-07-13 21:09:28.545406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.545546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.545580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:14.890 [2024-07-13 21:09:28.545609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:14.890 [2024-07-13 21:09:28.545620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.545685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.545705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:14.890 [2024-07-13 21:09:28.545717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:14.890 [2024-07-13 21:09:28.545728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.547818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.547914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:14.890 [2024-07-13 21:09:28.547975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.061 ms 00:18:14.890 [2024-07-13 21:09:28.547995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.548046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.548068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:14.890 [2024-07-13 21:09:28.548089] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:14.890 [2024-07-13 21:09:28.548143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.548203] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:14.890 [2024-07-13 21:09:28.548228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.548242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:14.890 [2024-07-13 21:09:28.548263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:14.890 [2024-07-13 21:09:28.548281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.579966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.580178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:14.890 [2024-07-13 21:09:28.580319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.625 ms 00:18:14.890 [2024-07-13 21:09:28.580373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.580573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.580632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:14.890 [2024-07-13 21:09:28.580731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:14.890 [2024-07-13 21:09:28.580777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.582050] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:14.890 [2024-07-13 21:09:28.585972] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 317.548 ms, result 0 00:18:14.890 [2024-07-13 21:09:28.587020] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:14.890 [2024-07-13 21:09:28.602512] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:14.890  Copying: 4096/4096 [kB] (average 22 MBps)[2024-07-13 21:09:28.785074] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:14.890 [2024-07-13 21:09:28.795675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.795713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:14.890 [2024-07-13 21:09:28.795746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:14.890 [2024-07-13 21:09:28.795763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.795791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:14.890 [2024-07-13 21:09:28.798908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.798953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:14.890 [2024-07-13 21:09:28.798983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.098 ms 00:18:14.890 [2024-07-13 21:09:28.799008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
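Each management step in the trace above is logged as a four-entry group (Action, name, duration, status), and finish_msg then reports the whole process total (e.g. 'FTL startup', duration = 317.548 ms earlier in this run). A minimal sketch of turning that into a per-step breakdown, assuming the console output has been saved with one log entry per line and the file name is passed on the command line; the script itself is hypothetical, not part of the test suite:

import re
import sys

# Match the trace_step "name:" and "duration:" entries exactly as they
# appear in this log; Rollback steps use the same format as Actions.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

total = 0.0
name = None
for line in open(sys.argv[1]):
    m = NAME_RE.search(line)
    if m:
        name = m.group(1)
        continue
    m = DUR_RE.search(line)
    if m and name is not None:
        dur = float(m.group(1))
        total += dur
        print(f"{dur:10.3f} ms  {name}")
        name = None

# The per-step sum should land close to the duration finish_msg reports.
print(f"{total:10.3f} ms  total")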
00:18:14.890 [2024-07-13 21:09:28.800798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.800867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:14.890 [2024-07-13 21:09:28.800886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.764 ms 00:18:14.890 [2024-07-13 21:09:28.800897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.804739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.804782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:14.890 [2024-07-13 21:09:28.804814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.819 ms 00:18:14.890 [2024-07-13 21:09:28.804825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.890 [2024-07-13 21:09:28.812153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.890 [2024-07-13 21:09:28.812195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:14.890 [2024-07-13 21:09:28.812213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.250 ms 00:18:14.890 [2024-07-13 21:09:28.812231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.150 [2024-07-13 21:09:28.842187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.150 [2024-07-13 21:09:28.842225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:15.150 [2024-07-13 21:09:28.842257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.867 ms 00:18:15.150 [2024-07-13 21:09:28.842267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.150 [2024-07-13 21:09:28.859544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.150 [2024-07-13 21:09:28.859581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:15.150 [2024-07-13 21:09:28.859633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.201 ms 00:18:15.150 [2024-07-13 21:09:28.859644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.150 [2024-07-13 21:09:28.859786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.150 [2024-07-13 21:09:28.859804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:15.150 [2024-07-13 21:09:28.859816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:15.150 [2024-07-13 21:09:28.859827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.150 [2024-07-13 21:09:28.889637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.150 [2024-07-13 21:09:28.889677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:15.150 [2024-07-13 21:09:28.889722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.789 ms 00:18:15.150 [2024-07-13 21:09:28.889732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.150 [2024-07-13 21:09:28.918924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.150 [2024-07-13 21:09:28.918975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:15.150 [2024-07-13 21:09:28.919006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.115 ms 00:18:15.150 [2024-07-13 21:09:28.919015] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.150 [2024-07-13 21:09:28.947324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.150 [2024-07-13 21:09:28.947375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:15.150 [2024-07-13 21:09:28.947423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.237 ms 00:18:15.150 [2024-07-13 21:09:28.947434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.150 [2024-07-13 21:09:28.977391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.150 [2024-07-13 21:09:28.977426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:15.150 [2024-07-13 21:09:28.977457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.852 ms 00:18:15.150 [2024-07-13 21:09:28.977467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.150 [2024-07-13 21:09:28.977538] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:15.150 [2024-07-13 21:09:28.977561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977780] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.977977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.978003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.978030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.978042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.978053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:15.150 [2024-07-13 21:09:28.978065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 
[2024-07-13 21:09:28.978165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:18:15.151 [2024-07-13 21:09:28.978499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:15.151 [2024-07-13 21:09:28.978922] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:15.151 [2024-07-13 21:09:28.978948] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f37e351e-d87e-4dea-9055-a9b6d8866d11 00:18:15.151 [2024-07-13 21:09:28.978961] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:15.151 [2024-07-13 21:09:28.979001] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:15.151 [2024-07-13 21:09:28.979014] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:15.151 [2024-07-13 21:09:28.979026] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:15.151 [2024-07-13 21:09:28.979037] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:15.151 [2024-07-13 21:09:28.979048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:15.151 [2024-07-13 21:09:28.979059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:15.151 [2024-07-13 21:09:28.979069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:15.151 [2024-07-13 21:09:28.979079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:15.151 [2024-07-13 21:09:28.979090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.151 [2024-07-13 21:09:28.979107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:15.151 [2024-07-13 21:09:28.979119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:18:15.151 [2024-07-13 21:09:28.979130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.151 [2024-07-13 21:09:28.995082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.151 [2024-07-13 21:09:28.995115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:15.151 [2024-07-13 21:09:28.995145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.925 ms 00:18:15.151 [2024-07-13 21:09:28.995155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.151 [2024-07-13 21:09:28.995394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.151 [2024-07-13 21:09:28.995411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:18:15.151 [2024-07-13 21:09:28.995423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:18:15.151 [2024-07-13 21:09:28.995434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.151 [2024-07-13 21:09:29.043647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.151 [2024-07-13 21:09:29.043693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:15.151 [2024-07-13 21:09:29.043725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.151 [2024-07-13 21:09:29.043737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.151 [2024-07-13 21:09:29.043842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.151 [2024-07-13 21:09:29.043898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:15.151 [2024-07-13 21:09:29.043912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.151 [2024-07-13 21:09:29.043922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.151 [2024-07-13 21:09:29.043995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.151 [2024-07-13 21:09:29.044012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:15.151 [2024-07-13 21:09:29.044024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.151 [2024-07-13 21:09:29.044034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.151 [2024-07-13 21:09:29.044063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.152 [2024-07-13 21:09:29.044077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:15.152 [2024-07-13 21:09:29.044088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.152 [2024-07-13 21:09:29.044107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.411 [2024-07-13 21:09:29.130447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.411 [2024-07-13 21:09:29.130505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:15.411 [2024-07-13 21:09:29.130538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.411 [2024-07-13 21:09:29.130548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.411 [2024-07-13 21:09:29.168090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.411 [2024-07-13 21:09:29.168169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:15.411 [2024-07-13 21:09:29.168204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.411 [2024-07-13 21:09:29.168216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.411 [2024-07-13 21:09:29.168295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.411 [2024-07-13 21:09:29.168314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:15.411 [2024-07-13 21:09:29.168327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.411 [2024-07-13 21:09:29.168339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.411 [2024-07-13 21:09:29.168375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.411 [2024-07-13 
21:09:29.168397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:15.411 [2024-07-13 21:09:29.168410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:15.411 [2024-07-13 21:09:29.168432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.411 [2024-07-13 21:09:29.168563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:15.411 [2024-07-13 21:09:29.168580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:15.411 [2024-07-13 21:09:29.168603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:15.411 [2024-07-13 21:09:29.168625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.411 [2024-07-13 21:09:29.168679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:15.411 [2024-07-13 21:09:29.168696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:15.411 [2024-07-13 21:09:29.168715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:15.411 [2024-07-13 21:09:29.168726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.411 [2024-07-13 21:09:29.168772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:15.411 [2024-07-13 21:09:29.168786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:15.411 [2024-07-13 21:09:29.168799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:15.411 [2024-07-13 21:09:29.168810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.411 [2024-07-13 21:09:29.168862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:15.411 [2024-07-13 21:09:29.168878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:15.411 [2024-07-13 21:09:29.168895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:15.411 [2024-07-13 21:09:29.168981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.411 [2024-07-13 21:09:29.169172] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.499 ms, result 0
00:18:16.347
00:18:16.347
00:18:16.347 21:09:30 -- ftl/trim.sh@93 -- # svcpid=73155
00:18:16.347 21:09:30 -- ftl/trim.sh@94 -- # waitforlisten 73155
00:18:16.347 21:09:30 -- common/autotest_common.sh@819 -- # '[' -z 73155 ']'
00:18:16.347 21:09:30 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:18:16.347 21:09:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:16.347 21:09:30 -- common/autotest_common.sh@824 -- # local max_retries=100
00:18:16.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:16.347 21:09:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:16.347 21:09:30 -- common/autotest_common.sh@828 -- # xtrace_disable
00:18:16.347 21:09:30 -- common/autotest_common.sh@10 -- # set +x
00:18:16.606 [2024-07-13 21:09:30.291720] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
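The trim.sh@92-@94 trace above starts spdk_tgt in the background, records its pid in svcpid, and blocks in waitforlisten until the target's RPC server answers on /var/tmp/spdk.sock. A condensed sketch of that pattern (not the autotest_common.sh helper itself; the retry cap and sleep interval here are illustrative):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # poll the default RPC socket until the target responds or exits
  for ((i = 0; i < 100; i++)); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
          rpc_get_methods &>/dev/null && break
      kill -0 "$svcpid" 2>/dev/null || exit 1   # target died before listening
      sleep 0.5
  done

Once the loop breaks, RPCs such as the load_config call below can be issued against the socket.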
00:18:16.606 [2024-07-13 21:09:30.291929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73155 ]
00:18:16.606 [2024-07-13 21:09:30.460551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:16.865 [2024-07-13 21:09:30.629881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:18:16.865 [2024-07-13 21:09:30.630143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:18.240 21:09:31 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:18:18.240 21:09:31 -- common/autotest_common.sh@852 -- # return 0
00:18:18.240 21:09:31 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:18:18.240 [2024-07-13 21:09:32.136925] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:18.240 [2024-07-13 21:09:32.137006] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:18.500 [2024-07-13 21:09:32.306939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:18.500 [2024-07-13 21:09:32.306993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:18.500 [2024-07-13 21:09:32.307030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:18:18.500 [2024-07-13 21:09:32.307042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.500 [2024-07-13 21:09:32.310094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:18.500 [2024-07-13 21:09:32.310147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:18.500 [2024-07-13 21:09:32.310184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.013 ms
00:18:18.500 [2024-07-13 21:09:32.310196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.500 [2024-07-13 21:09:32.310369] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:18.500 [2024-07-13 21:09:32.311309] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:18.500 [2024-07-13 21:09:32.311353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:18.500 [2024-07-13 21:09:32.311367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:18.500 [2024-07-13 21:09:32.311381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms
00:18:18.500 [2024-07-13 21:09:32.311393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.500 [2024-07-13 21:09:32.312715] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:18.500 [2024-07-13 21:09:32.328533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:18.500 [2024-07-13 21:09:32.328578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:18:18.500 [2024-07-13 21:09:32.328621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.823 ms
00:18:18.500 [2024-07-13 21:09:32.328635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.500 [2024-07-13 21:09:32.328736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:18.500 [2024-07-13 21:09:32.328759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*:
[FTL][ftl0] name: Validate super block 00:18:18.500 [2024-07-13 21:09:32.328771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:18.500 [2024-07-13 21:09:32.328784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.500 [2024-07-13 21:09:32.333324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.500 [2024-07-13 21:09:32.333364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:18.500 [2024-07-13 21:09:32.333394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.468 ms 00:18:18.500 [2024-07-13 21:09:32.333408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.500 [2024-07-13 21:09:32.333507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.500 [2024-07-13 21:09:32.333527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:18.500 [2024-07-13 21:09:32.333539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:18.500 [2024-07-13 21:09:32.333550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.500 [2024-07-13 21:09:32.333582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.500 [2024-07-13 21:09:32.333618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:18.500 [2024-07-13 21:09:32.333629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:18.500 [2024-07-13 21:09:32.333641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.500 [2024-07-13 21:09:32.333674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:18.500 [2024-07-13 21:09:32.337683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.500 [2024-07-13 21:09:32.337716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:18.500 [2024-07-13 21:09:32.337748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.017 ms 00:18:18.500 [2024-07-13 21:09:32.337759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.500 [2024-07-13 21:09:32.337823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.500 [2024-07-13 21:09:32.337839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:18.500 [2024-07-13 21:09:32.337904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:18.500 [2024-07-13 21:09:32.337918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.500 [2024-07-13 21:09:32.337949] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:18.500 [2024-07-13 21:09:32.337977] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:18.500 [2024-07-13 21:09:32.338033] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:18.500 [2024-07-13 21:09:32.338052] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:18.500 [2024-07-13 21:09:32.338131] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:18.500 [2024-07-13 21:09:32.338178] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:18:18.500 [2024-07-13 21:09:32.338211] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:18.500 [2024-07-13 21:09:32.338241] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:18.501 [2024-07-13 21:09:32.338258] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:18.501 [2024-07-13 21:09:32.338271] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:18.501 [2024-07-13 21:09:32.338284] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:18.501 [2024-07-13 21:09:32.338294] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:18.501 [2024-07-13 21:09:32.338309] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:18.501 [2024-07-13 21:09:32.338320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.501 [2024-07-13 21:09:32.338333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:18.501 [2024-07-13 21:09:32.338345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:18:18.501 [2024-07-13 21:09:32.338358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.501 [2024-07-13 21:09:32.338432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.501 [2024-07-13 21:09:32.338453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:18.501 [2024-07-13 21:09:32.338468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:18.501 [2024-07-13 21:09:32.338481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.501 [2024-07-13 21:09:32.338567] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:18.501 [2024-07-13 21:09:32.338585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:18.501 [2024-07-13 21:09:32.338613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:18.501 [2024-07-13 21:09:32.338627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.501 [2024-07-13 21:09:32.338639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:18.501 [2024-07-13 21:09:32.338653] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:18.501 [2024-07-13 21:09:32.338664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:18.501 [2024-07-13 21:09:32.338679] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:18.501 [2024-07-13 21:09:32.338690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:18.501 [2024-07-13 21:09:32.338702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:18.501 [2024-07-13 21:09:32.338713] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:18.501 [2024-07-13 21:09:32.338725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:18.501 [2024-07-13 21:09:32.338735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:18.501 [2024-07-13 21:09:32.338748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:18.501 [2024-07-13 21:09:32.338760] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:18.501 [2024-07-13 21:09:32.338773] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.501 [2024-07-13 21:09:32.338783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:18.501 [2024-07-13 21:09:32.338795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:18.501 [2024-07-13 21:09:32.338806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.501 [2024-07-13 21:09:32.338818] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:18.501 [2024-07-13 21:09:32.338829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:18.501 [2024-07-13 21:09:32.338840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:18.501 [2024-07-13 21:09:32.338870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:18.501 [2024-07-13 21:09:32.338890] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:18.501 [2024-07-13 21:09:32.338900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:18.501 [2024-07-13 21:09:32.338913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:18.501 [2024-07-13 21:09:32.338923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:18.501 [2024-07-13 21:09:32.338935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:18.501 [2024-07-13 21:09:32.338946] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:18.501 [2024-07-13 21:09:32.338973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:18.501 [2024-07-13 21:09:32.338996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:18.501 [2024-07-13 21:09:32.339009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:18.501 [2024-07-13 21:09:32.339019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:18.501 [2024-07-13 21:09:32.339031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:18.501 [2024-07-13 21:09:32.339041] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:18.501 [2024-07-13 21:09:32.339052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:18.501 [2024-07-13 21:09:32.339062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:18.501 [2024-07-13 21:09:32.339074] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:18.501 [2024-07-13 21:09:32.339084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:18.501 [2024-07-13 21:09:32.339098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:18.501 [2024-07-13 21:09:32.339107] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:18.501 [2024-07-13 21:09:32.339120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:18.501 [2024-07-13 21:09:32.339131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:18.501 [2024-07-13 21:09:32.339146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.501 [2024-07-13 21:09:32.339158] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:18.501 [2024-07-13 21:09:32.339170] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:18.501 [2024-07-13 21:09:32.339182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:18:18.501 [2024-07-13 21:09:32.339194] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:18.501 [2024-07-13 21:09:32.339204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:18.501 [2024-07-13 21:09:32.339216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:18.501 [2024-07-13 21:09:32.339228] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:18.501 [2024-07-13 21:09:32.339244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:18.501 [2024-07-13 21:09:32.339256] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:18.501 [2024-07-13 21:09:32.339270] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:18.501 [2024-07-13 21:09:32.339282] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:18.501 [2024-07-13 21:09:32.339298] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:18.501 [2024-07-13 21:09:32.339310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:18.501 [2024-07-13 21:09:32.339323] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:18.501 [2024-07-13 21:09:32.339334] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:18.501 [2024-07-13 21:09:32.339347] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:18.501 [2024-07-13 21:09:32.339359] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:18.501 [2024-07-13 21:09:32.339374] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:18.501 [2024-07-13 21:09:32.339385] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:18.501 [2024-07-13 21:09:32.339399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:18.501 [2024-07-13 21:09:32.339411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:18.501 [2024-07-13 21:09:32.339424] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:18.501 [2024-07-13 21:09:32.339437] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:18.501 [2024-07-13 21:09:32.339452] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:18.501 [2024-07-13 21:09:32.339464] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:18.501 [2024-07-13 21:09:32.339477] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:18.501 [2024-07-13 21:09:32.339489] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:18.501 [2024-07-13 21:09:32.339505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.501 [2024-07-13 21:09:32.339516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:18.501 [2024-07-13 21:09:32.339530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:18:18.501 [2024-07-13 21:09:32.339541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.501 [2024-07-13 21:09:32.356516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.501 [2024-07-13 21:09:32.356557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.501 [2024-07-13 21:09:32.356603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.875 ms 00:18:18.501 [2024-07-13 21:09:32.356614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.501 [2024-07-13 21:09:32.356756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.501 [2024-07-13 21:09:32.356775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:18.501 [2024-07-13 21:09:32.356789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:18.501 [2024-07-13 21:09:32.356800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.501 [2024-07-13 21:09:32.394506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.501 [2024-07-13 21:09:32.394555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.501 [2024-07-13 21:09:32.394607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.678 ms 00:18:18.501 [2024-07-13 21:09:32.394635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.501 [2024-07-13 21:09:32.394745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.501 [2024-07-13 21:09:32.394763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.501 [2024-07-13 21:09:32.394778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:18.501 [2024-07-13 21:09:32.394791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.501 [2024-07-13 21:09:32.395203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.501 [2024-07-13 21:09:32.395222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.501 [2024-07-13 21:09:32.395254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:18:18.502 [2024-07-13 21:09:32.395266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.502 [2024-07-13 21:09:32.395443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.502 [2024-07-13 21:09:32.395472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.502 [2024-07-13 21:09:32.395489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:18:18.502 [2024-07-13 21:09:32.395500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:18.502 [2024-07-13 21:09:32.413366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.502 [2024-07-13 21:09:32.413444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.502 [2024-07-13 21:09:32.413483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.831 ms 00:18:18.502 [2024-07-13 21:09:32.413495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.430462] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:18.762 [2024-07-13 21:09:32.430502] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:18.762 [2024-07-13 21:09:32.430537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.430549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:18.762 [2024-07-13 21:09:32.430563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.845 ms 00:18:18.762 [2024-07-13 21:09:32.430573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.458823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.458888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:18.762 [2024-07-13 21:09:32.458929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.161 ms 00:18:18.762 [2024-07-13 21:09:32.458941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.473901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.473937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:18.762 [2024-07-13 21:09:32.473986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.842 ms 00:18:18.762 [2024-07-13 21:09:32.473996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.488903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.488963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:18.762 [2024-07-13 21:09:32.488984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.825 ms 00:18:18.762 [2024-07-13 21:09:32.488994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.489416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.489442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:18.762 [2024-07-13 21:09:32.489458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:18.762 [2024-07-13 21:09:32.489469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.562833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.562925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:18.762 [2024-07-13 21:09:32.562962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.278 ms 00:18:18.762 [2024-07-13 21:09:32.562990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 
21:09:32.574705] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:18.762 [2024-07-13 21:09:32.587324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.587400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:18.762 [2024-07-13 21:09:32.587419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.191 ms 00:18:18.762 [2024-07-13 21:09:32.587443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.587556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.587578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:18.762 [2024-07-13 21:09:32.587601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:18.762 [2024-07-13 21:09:32.587613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.587670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.587687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:18.762 [2024-07-13 21:09:32.587698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:18.762 [2024-07-13 21:09:32.587710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.589683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.589719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:18.762 [2024-07-13 21:09:32.589749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.946 ms 00:18:18.762 [2024-07-13 21:09:32.589761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.589795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.589814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:18.762 [2024-07-13 21:09:32.589826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:18.762 [2024-07-13 21:09:32.589841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.589941] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:18.762 [2024-07-13 21:09:32.589966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.589977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:18.762 [2024-07-13 21:09:32.589990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:18.762 [2024-07-13 21:09:32.590001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.618224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.618280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:18.762 [2024-07-13 21:09:32.618314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.191 ms 00:18:18.762 [2024-07-13 21:09:32.618325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.762 [2024-07-13 21:09:32.618453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.762 [2024-07-13 21:09:32.618472] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:18.762 [2024-07-13 21:09:32.618485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms
00:18:18.762 [2024-07-13 21:09:32.618496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.762 [2024-07-13 21:09:32.619522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:18.762 [2024-07-13 21:09:32.623506] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.270 ms, result 0
00:18:18.762 [2024-07-13 21:09:32.624776] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:18.762 Some configs were skipped because the RPC state that can call them passed over.
00:18:18.762 21:09:32 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:18:19.021 [2024-07-13 21:09:32.934721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:19.021 [2024-07-13 21:09:32.935021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:18:19.021 [2024-07-13 21:09:32.935164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.416 ms
00:18:19.021 [2024-07-13 21:09:32.935353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:19.021 [2024-07-13 21:09:32.935450] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 31.142 ms, result 0
00:18:19.021 true
00:18:19.280 21:09:32 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:18:19.542 [2024-07-13 21:09:33.208393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:19.542 [2024-07-13 21:09:33.208690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:18:19.542 [2024-07-13 21:09:33.208859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.196 ms
00:18:19.542 [2024-07-13 21:09:33.208917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:19.542 [2024-07-13 21:09:33.209093] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 29.888 ms, result 0
00:18:19.542 true
00:18:19.542 21:09:33 -- ftl/trim.sh@102 -- # killprocess 73155
00:18:19.542 21:09:33 -- common/autotest_common.sh@926 -- # '[' -z 73155 ']'
00:18:19.542 21:09:33 -- common/autotest_common.sh@930 -- # kill -0 73155
00:18:19.542 21:09:33 -- common/autotest_common.sh@931 -- # uname
00:18:19.542 21:09:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:18:19.542 21:09:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73155
00:18:19.542 killing process with pid 73155
00:18:19.542 21:09:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:18:19.542 21:09:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:18:19.542 21:09:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73155'
00:18:19.542 21:09:33 -- common/autotest_common.sh@945 -- # kill 73155
00:18:19.542 21:09:33 -- common/autotest_common.sh@950 -- # wait 73155
00:18:20.481 [2024-07-13 21:09:34.101371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.481 [2024-07-13 21:09:34.101435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name:
Deinit core IO channel 00:18:20.481 [2024-07-13 21:09:34.101470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:20.481 [2024-07-13 21:09:34.101483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.101512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:20.481 [2024-07-13 21:09:34.104907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.104965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:20.481 [2024-07-13 21:09:34.105021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:18:20.481 [2024-07-13 21:09:34.105032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.105412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.105437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:20.481 [2024-07-13 21:09:34.105453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:18:20.481 [2024-07-13 21:09:34.105465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.109732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.109772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:20.481 [2024-07-13 21:09:34.109792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:18:20.481 [2024-07-13 21:09:34.109806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.117049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.117080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:20.481 [2024-07-13 21:09:34.117112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.161 ms 00:18:20.481 [2024-07-13 21:09:34.117122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.128919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.128975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:20.481 [2024-07-13 21:09:34.129021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.735 ms 00:18:20.481 [2024-07-13 21:09:34.129032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.137226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.137277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:20.481 [2024-07-13 21:09:34.137310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.145 ms 00:18:20.481 [2024-07-13 21:09:34.137323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.137462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.137480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:20.481 [2024-07-13 21:09:34.137494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:20.481 [2024-07-13 21:09:34.137504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
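(For reference, the two trim RPCs and the killprocess cleanup traced above reduce to the following; this is a minimal sketch reconstructed from the xtrace lines, not the exact body of test/common/autotest_common.sh:)

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Unmap the first and the last 1024-block range of ftl0. 23591936 is
  # 23592960 - 1024, i.e. the tail of the device (the layout dump further
  # down reports 23592960 L2P entries).
  "$RPC" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
  "$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

  killprocess() {                                 # approximation of the traced helper
      local pid=$1
      [[ -n $pid ]] || return 1                   # the '[' -z 73155 ']' guard
      kill -0 "$pid"                              # is the process still alive?
      [[ $(uname) == Linux ]] && \
          process_name=$(ps --no-headers -o comm= "$pid")
      if [[ $process_name == sudo ]]; then
          return 1                                # real helper special-cases sudo wrappers
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                  # reap so ports/files are released
  }
  killprocess 73155                               # here: reactor_0, the SPDK app
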
00:18:20.481 [2024-07-13 21:09:34.149730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.149765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:20.481 [2024-07-13 21:09:34.149799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.198 ms 00:18:20.481 [2024-07-13 21:09:34.149809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.161648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.161682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:20.481 [2024-07-13 21:09:34.161718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.780 ms 00:18:20.481 [2024-07-13 21:09:34.161728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.173169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.173202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:20.481 [2024-07-13 21:09:34.173234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.395 ms 00:18:20.481 [2024-07-13 21:09:34.173244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.184707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.481 [2024-07-13 21:09:34.184741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:20.481 [2024-07-13 21:09:34.184773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.391 ms 00:18:20.481 [2024-07-13 21:09:34.184783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.481 [2024-07-13 21:09:34.184826] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:20.481 [2024-07-13 21:09:34.184880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:20.481 [2024-07-13 21:09:34.184898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:20.481 [2024-07-13 21:09:34.184910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:20.481 [2024-07-13 21:09:34.184923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:20.481 [2024-07-13 21:09:34.184960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:20.481 [2024-07-13 21:09:34.184992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:20.481 [2024-07-13 21:09:34.185004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:20.481 [2024-07-13 21:09:34.185017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:20.481 [2024-07-13 21:09:34.185029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:20.481 [2024-07-13 21:09:34.185042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:20.481 [2024-07-13 21:09:34.185053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185066] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 
21:09:34.185415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:18:20.482 [2024-07-13 21:09:34.185762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.185992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:20.482 [2024-07-13 21:09:34.186314] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:20.483 [2024-07-13 21:09:34.186341] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f37e351e-d87e-4dea-9055-a9b6d8866d11 00:18:20.483 [2024-07-13 21:09:34.186353] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:20.483 [2024-07-13 21:09:34.186368] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:20.483 [2024-07-13 21:09:34.186379] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:20.483 [2024-07-13 21:09:34.186392] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:20.483 [2024-07-13 21:09:34.186402] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:20.483 [2024-07-13 21:09:34.186415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:20.483 [2024-07-13 21:09:34.186427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:20.483 [2024-07-13 21:09:34.186439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:20.483 [2024-07-13 21:09:34.186449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:20.483 [2024-07-13 21:09:34.186462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.483 [2024-07-13 21:09:34.186473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:20.483 [2024-07-13 21:09:34.186487] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms 00:18:20.483 [2024-07-13 21:09:34.186498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.202456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.483 [2024-07-13 21:09:34.202491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:20.483 [2024-07-13 21:09:34.202529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.913 ms 00:18:20.483 [2024-07-13 21:09:34.202539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.202799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.483 [2024-07-13 21:09:34.202820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:20.483 [2024-07-13 21:09:34.202849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:18:20.483 [2024-07-13 21:09:34.202863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.257996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.258058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:20.483 [2024-07-13 21:09:34.258091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.258101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.259649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.259684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:20.483 [2024-07-13 21:09:34.259703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.259715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.259787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.259806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:20.483 [2024-07-13 21:09:34.259823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.259847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.259879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.259893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:20.483 [2024-07-13 21:09:34.259907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.259918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.356936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.356997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:20.483 [2024-07-13 21:09:34.357033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.357044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.391534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.391571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:18:20.483 [2024-07-13 21:09:34.391621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.391632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.391704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.391721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:20.483 [2024-07-13 21:09:34.391736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.391746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.391781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.391793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:20.483 [2024-07-13 21:09:34.391805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.391815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.391977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.392013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:20.483 [2024-07-13 21:09:34.392027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.392038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.392090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.392131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:20.483 [2024-07-13 21:09:34.392162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.392173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.392221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.392238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:20.483 [2024-07-13 21:09:34.392254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.392266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.392337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.483 [2024-07-13 21:09:34.392354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:20.483 [2024-07-13 21:09:34.392368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.483 [2024-07-13 21:09:34.392379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.483 [2024-07-13 21:09:34.392555] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 291.159 ms, result 0 00:18:21.857 21:09:35 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:21.857 [2024-07-13 21:09:35.487170] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
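(The spdk_dd invocation from trim.sh@105 above reads the device back through the bdev layer; annotated here as a sketch. The flags are exactly as logged; the 4 KiB block size is an inference from --count=65536 adding up to the 256 [MB] reported by the copy progress below, which also matches the layout dump's 90.00 MiB l2p region: 23592960 entries x 4-byte L2P addresses.)

  SPDK=/home/vagrant/spdk_repo/spdk
  # --ib: input bdev (the FTL device itself), --of: output file in the repo,
  # --count: blocks to copy (65536 * 4 KiB = 256 MiB),
  # --json: bdev config so spdk_dd can bring ftl0 up in-process.
  "$SPDK"/build/bin/spdk_dd --ib=ftl0 --of="$SPDK"/test/ftl/data \
      --count=65536 --json="$SPDK"/test/ftl/config/ftl.json
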
00:18:21.857 [2024-07-13 21:09:35.487346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73226 ] 00:18:21.857 [2024-07-13 21:09:35.644533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.114 [2024-07-13 21:09:35.806699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.373 [2024-07-13 21:09:36.091686] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.373 [2024-07-13 21:09:36.091774] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.373 [2024-07-13 21:09:36.244627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.244673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.373 [2024-07-13 21:09:36.244708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:22.373 [2024-07-13 21:09:36.244722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.247810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.247876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.373 [2024-07-13 21:09:36.247909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.062 ms 00:18:22.373 [2024-07-13 21:09:36.247925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.248068] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.373 [2024-07-13 21:09:36.249131] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.373 [2024-07-13 21:09:36.249168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.249201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.373 [2024-07-13 21:09:36.249212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:18:22.373 [2024-07-13 21:09:36.249222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.250457] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:22.373 [2024-07-13 21:09:36.265100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.265152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:22.373 [2024-07-13 21:09:36.265185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.645 ms 00:18:22.373 [2024-07-13 21:09:36.265196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.265315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.265335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:22.373 [2024-07-13 21:09:36.265351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:22.373 [2024-07-13 21:09:36.265361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.269873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 
21:09:36.269909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.373 [2024-07-13 21:09:36.269939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.452 ms 00:18:22.373 [2024-07-13 21:09:36.269949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.270087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.270106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.373 [2024-07-13 21:09:36.270118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:22.373 [2024-07-13 21:09:36.270128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.270166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.270180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.373 [2024-07-13 21:09:36.270190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:22.373 [2024-07-13 21:09:36.270200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.270230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:22.373 [2024-07-13 21:09:36.274330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.274365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.373 [2024-07-13 21:09:36.274396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.111 ms 00:18:22.373 [2024-07-13 21:09:36.274406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.274469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.274486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.373 [2024-07-13 21:09:36.274498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:22.373 [2024-07-13 21:09:36.274508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.274531] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:22.373 [2024-07-13 21:09:36.274574] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:22.373 [2024-07-13 21:09:36.274630] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:22.373 [2024-07-13 21:09:36.274653] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:22.373 [2024-07-13 21:09:36.274747] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:22.373 [2024-07-13 21:09:36.274762] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.373 [2024-07-13 21:09:36.274776] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:22.373 [2024-07-13 21:09:36.274791] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.373 [2024-07-13 21:09:36.274804] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.373 [2024-07-13 21:09:36.274817] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:22.373 [2024-07-13 21:09:36.274828] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.373 [2024-07-13 21:09:36.274839] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:22.373 [2024-07-13 21:09:36.274849] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:22.373 [2024-07-13 21:09:36.274860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.274876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.373 [2024-07-13 21:09:36.274888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:18:22.373 [2024-07-13 21:09:36.274919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.373 [2024-07-13 21:09:36.275048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.373 [2024-07-13 21:09:36.275078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.374 [2024-07-13 21:09:36.275091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:22.374 [2024-07-13 21:09:36.275102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.374 [2024-07-13 21:09:36.275182] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.374 [2024-07-13 21:09:36.275197] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.374 [2024-07-13 21:09:36.275212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.374 [2024-07-13 21:09:36.275223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.374 [2024-07-13 21:09:36.275244] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:22.374 [2024-07-13 21:09:36.275265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.374 [2024-07-13 21:09:36.275276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.374 [2024-07-13 21:09:36.275309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.374 [2024-07-13 21:09:36.275319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:22.374 [2024-07-13 21:09:36.275328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.374 [2024-07-13 21:09:36.275338] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.374 [2024-07-13 21:09:36.275348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:22.374 [2024-07-13 21:09:36.275358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275367] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.374 [2024-07-13 21:09:36.275376] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:22.374 [2024-07-13 21:09:36.275386] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:22.374 [2024-07-13 21:09:36.275417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:22.374 [2024-07-13 21:09:36.275427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:22.374 [2024-07-13 21:09:36.275437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.374 [2024-07-13 21:09:36.275446] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.374 [2024-07-13 21:09:36.275481] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.374 [2024-07-13 21:09:36.275491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.374 [2024-07-13 21:09:36.275509] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.374 [2024-07-13 21:09:36.275519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.374 [2024-07-13 21:09:36.275538] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.374 [2024-07-13 21:09:36.275548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.374 [2024-07-13 21:09:36.275567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.374 [2024-07-13 21:09:36.275577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.374 [2024-07-13 21:09:36.275612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.374 [2024-07-13 21:09:36.275622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:22.374 [2024-07-13 21:09:36.275632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.374 [2024-07-13 21:09:36.275642] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.374 [2024-07-13 21:09:36.275653] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.374 [2024-07-13 21:09:36.275663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.374 [2024-07-13 21:09:36.275675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.374 [2024-07-13 21:09:36.275691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.374 [2024-07-13 21:09:36.275703] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.374 [2024-07-13 21:09:36.275713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.374 [2024-07-13 21:09:36.275724] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.374 [2024-07-13 21:09:36.275734] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.374 [2024-07-13 21:09:36.275744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.374 [2024-07-13 21:09:36.275755] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.374 [2024-07-13 21:09:36.275769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.374 [2024-07-13 21:09:36.275781] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:22.374 [2024-07-13 21:09:36.275792] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:22.374 [2024-07-13 21:09:36.275803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:22.374 [2024-07-13 21:09:36.275814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:22.374 [2024-07-13 21:09:36.275825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:22.374 [2024-07-13 21:09:36.275836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:22.374 [2024-07-13 21:09:36.275846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:22.374 [2024-07-13 21:09:36.275857] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:22.374 [2024-07-13 21:09:36.275868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:22.374 [2024-07-13 21:09:36.275879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:22.374 [2024-07-13 21:09:36.275891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:22.374 [2024-07-13 21:09:36.275902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:22.374 [2024-07-13 21:09:36.275913] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:22.374 [2024-07-13 21:09:36.275924] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.374 [2024-07-13 21:09:36.275972] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.374 [2024-07-13 21:09:36.275993] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.374 [2024-07-13 21:09:36.276004] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.374 [2024-07-13 21:09:36.276015] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.374 [2024-07-13 21:09:36.276026] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.374 [2024-07-13 21:09:36.276037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.374 [2024-07-13 21:09:36.276048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.374 [2024-07-13 21:09:36.276058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:18:22.374 [2024-07-13 21:09:36.276069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.374 [2024-07-13 21:09:36.294754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.374 [2024-07-13 21:09:36.294818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.374 [2024-07-13 21:09:36.294859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.564 ms 00:18:22.374 [2024-07-13 21:09:36.294875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.374 [2024-07-13 21:09:36.295049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.374 [2024-07-13 21:09:36.295116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.374 [2024-07-13 21:09:36.295130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:22.374 [2024-07-13 21:09:36.295141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.633 [2024-07-13 21:09:36.347067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.633 [2024-07-13 21:09:36.347115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.633 [2024-07-13 21:09:36.347150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.892 ms 00:18:22.633 [2024-07-13 21:09:36.347161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.633 [2024-07-13 21:09:36.347291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.633 [2024-07-13 21:09:36.347309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.633 [2024-07-13 21:09:36.347322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.633 [2024-07-13 21:09:36.347332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.633 [2024-07-13 21:09:36.347656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.633 [2024-07-13 21:09:36.347674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.633 [2024-07-13 21:09:36.347686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:18:22.633 [2024-07-13 21:09:36.347697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.633 [2024-07-13 21:09:36.347831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.633 [2024-07-13 21:09:36.347849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.633 [2024-07-13 21:09:36.347860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:22.633 [2024-07-13 21:09:36.347871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.633 [2024-07-13 21:09:36.364365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.633 [2024-07-13 21:09:36.364406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.634 [2024-07-13 21:09:36.364439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.423 ms 00:18:22.634 
[2024-07-13 21:09:36.364466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.379757] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:22.634 [2024-07-13 21:09:36.379799] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.634 [2024-07-13 21:09:36.379831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.379843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.634 [2024-07-13 21:09:36.379904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.233 ms 00:18:22.634 [2024-07-13 21:09:36.379917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.407768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.407807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.634 [2024-07-13 21:09:36.407846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.745 ms 00:18:22.634 [2024-07-13 21:09:36.407890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.422697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.422734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.634 [2024-07-13 21:09:36.422764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.699 ms 00:18:22.634 [2024-07-13 21:09:36.422775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.437630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.437676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:22.634 [2024-07-13 21:09:36.437708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.721 ms 00:18:22.634 [2024-07-13 21:09:36.437718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.438266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.438301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.634 [2024-07-13 21:09:36.438317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:18:22.634 [2024-07-13 21:09:36.438329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.507798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.507876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.634 [2024-07-13 21:09:36.507913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.434 ms 00:18:22.634 [2024-07-13 21:09:36.507925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.519778] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:22.634 [2024-07-13 21:09:36.532333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.532393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.634 [2024-07-13 21:09:36.532413] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.249 ms 00:18:22.634 [2024-07-13 21:09:36.532426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.532589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.532619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.634 [2024-07-13 21:09:36.532632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:22.634 [2024-07-13 21:09:36.532647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.532708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.532724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.634 [2024-07-13 21:09:36.532735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:22.634 [2024-07-13 21:09:36.532745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.534620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.534654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:22.634 [2024-07-13 21:09:36.534684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.851 ms 00:18:22.634 [2024-07-13 21:09:36.534695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.534731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.534744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:22.634 [2024-07-13 21:09:36.534761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:22.634 [2024-07-13 21:09:36.534772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.634 [2024-07-13 21:09:36.534810] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.634 [2024-07-13 21:09:36.534825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.634 [2024-07-13 21:09:36.534835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.634 [2024-07-13 21:09:36.534845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:22.634 [2024-07-13 21:09:36.534886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.891 [2024-07-13 21:09:36.564042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.891 [2024-07-13 21:09:36.564088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.891 [2024-07-13 21:09:36.564166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.106 ms 00:18:22.892 [2024-07-13 21:09:36.564179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.892 [2024-07-13 21:09:36.564311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.892 [2024-07-13 21:09:36.564332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:22.892 [2024-07-13 21:09:36.564347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:22.892 [2024-07-13 21:09:36.564359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.892 [2024-07-13 21:09:36.565446] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.892 [2024-07-13 21:09:36.569617] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.421 ms, result 0 00:18:22.892 [2024-07-13 21:09:36.570427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:22.892 [2024-07-13 21:09:36.586506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:34.077  Copying: 26/256 [MB] (26 MBps) Copying: 50/256 [MB] (23 MBps) Copying: 72/256 [MB] (22 MBps) Copying: 95/256 [MB] (22 MBps) Copying: 118/256 [MB] (23 MBps) Copying: 141/256 [MB] (22 MBps) Copying: 162/256 [MB] (21 MBps) Copying: 184/256 [MB] (22 MBps) Copying: 208/256 [MB] (23 MBps) Copying: 232/256 [MB] (23 MBps) Copying: 255/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-13 21:09:47.987171] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:34.338 [2024-07-13 21:09:48.001181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.001252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:34.338 [2024-07-13 21:09:48.001300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:34.338 [2024-07-13 21:09:48.001324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.001359] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:34.338 [2024-07-13 21:09:48.005297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.005334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:34.338 [2024-07-13 21:09:48.005352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.914 ms 00:18:34.338 [2024-07-13 21:09:48.005363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.005706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.005734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:34.338 [2024-07-13 21:09:48.005756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:18:34.338 [2024-07-13 21:09:48.005768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.009854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.009892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:34.338 [2024-07-13 21:09:48.009909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.055 ms 00:18:34.338 [2024-07-13 21:09:48.009921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.018113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.018176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:34.338 [2024-07-13 21:09:48.018213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.147 ms 00:18:34.338 [2024-07-13 21:09:48.018225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.051238] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.051293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:34.338 [2024-07-13 21:09:48.051329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.906 ms 00:18:34.338 [2024-07-13 21:09:48.051340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.067878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.067924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:34.338 [2024-07-13 21:09:48.067959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.472 ms 00:18:34.338 [2024-07-13 21:09:48.067970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.068173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.068207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:34.338 [2024-07-13 21:09:48.068223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:34.338 [2024-07-13 21:09:48.068243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.097690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.097731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:34.338 [2024-07-13 21:09:48.097778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.409 ms 00:18:34.338 [2024-07-13 21:09:48.097789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.126638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.126678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:34.338 [2024-07-13 21:09:48.126711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.771 ms 00:18:34.338 [2024-07-13 21:09:48.126722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.154914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.154953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:34.338 [2024-07-13 21:09:48.154986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.131 ms 00:18:34.338 [2024-07-13 21:09:48.154997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.183267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.338 [2024-07-13 21:09:48.183306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:34.338 [2024-07-13 21:09:48.183338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.177 ms 00:18:34.338 [2024-07-13 21:09:48.183348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.338 [2024-07-13 21:09:48.183407] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:34.338 [2024-07-13 21:09:48.183429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:34.338 [2024-07-13 21:09:48.183442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:34.338 [2024-07-13 
21:09:48.183453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:34.338 [... Band 4 through Band 100 omitted: all 97 remaining bands report the identical line '0 / 261120 wr_cnt: 0 state: free' ...] 00:18:34.339 [2024-07-13 21:09:48.184742] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:34.339 [2024-07-13 21:09:48.184768] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:
f37e351e-d87e-4dea-9055-a9b6d8866d11 00:18:34.339 [2024-07-13 21:09:48.184781] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:34.339 [2024-07-13 21:09:48.184792] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:34.339 [2024-07-13 21:09:48.184803] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:34.339 [2024-07-13 21:09:48.184814] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:34.339 [2024-07-13 21:09:48.184825] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:34.339 [2024-07-13 21:09:48.184836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:34.339 [2024-07-13 21:09:48.184865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:34.339 [2024-07-13 21:09:48.184878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:34.339 [2024-07-13 21:09:48.184889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:34.339 [2024-07-13 21:09:48.184901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.339 [2024-07-13 21:09:48.184912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:34.339 [2024-07-13 21:09:48.184925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:18:34.339 [2024-07-13 21:09:48.184936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.339 [2024-07-13 21:09:48.200765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.339 [2024-07-13 21:09:48.200801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:34.339 [2024-07-13 21:09:48.200834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.799 ms 00:18:34.339 [2024-07-13 21:09:48.200860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.339 [2024-07-13 21:09:48.201175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.339 [2024-07-13 21:09:48.201201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:34.339 [2024-07-13 21:09:48.201233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:18:34.339 [2024-07-13 21:09:48.201245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.339 [2024-07-13 21:09:48.247916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.339 [2024-07-13 21:09:48.247996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:34.339 [2024-07-13 21:09:48.248015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.339 [2024-07-13 21:09:48.248034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.339 [2024-07-13 21:09:48.248175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.339 [2024-07-13 21:09:48.248195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:34.339 [2024-07-13 21:09:48.248208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.339 [2024-07-13 21:09:48.248220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.339 [2024-07-13 21:09:48.248281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.339 [2024-07-13 21:09:48.248300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:34.339 
[2024-07-13 21:09:48.248312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.339 [2024-07-13 21:09:48.248324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.339 [2024-07-13 21:09:48.248356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.339 [2024-07-13 21:09:48.248370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:34.339 [2024-07-13 21:09:48.248382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.339 [2024-07-13 21:09:48.248393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.599 [2024-07-13 21:09:48.340122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.600 [2024-07-13 21:09:48.340223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:34.600 [2024-07-13 21:09:48.340265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.600 [2024-07-13 21:09:48.340288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.600 [2024-07-13 21:09:48.388478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.600 [2024-07-13 21:09:48.388580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:34.600 [2024-07-13 21:09:48.388624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.600 [2024-07-13 21:09:48.388641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.600 [2024-07-13 21:09:48.388782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.600 [2024-07-13 21:09:48.388808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.600 [2024-07-13 21:09:48.388825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.600 [2024-07-13 21:09:48.388841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.600 [2024-07-13 21:09:48.388954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.600 [2024-07-13 21:09:48.388988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.600 [2024-07-13 21:09:48.389007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.600 [2024-07-13 21:09:48.389022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.600 [2024-07-13 21:09:48.389184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.600 [2024-07-13 21:09:48.389209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.600 [2024-07-13 21:09:48.389227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.600 [2024-07-13 21:09:48.389243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.600 [2024-07-13 21:09:48.389344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.600 [2024-07-13 21:09:48.389366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:34.600 [2024-07-13 21:09:48.389390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.600 [2024-07-13 21:09:48.389405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.600 [2024-07-13 21:09:48.389466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.600 [2024-07-13 21:09:48.389486] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.600 [2024-07-13 21:09:48.389502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.600 [2024-07-13 21:09:48.389517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.600 [2024-07-13 21:09:48.389587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.600 [2024-07-13 21:09:48.389632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.600 [2024-07-13 21:09:48.389653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.600 [2024-07-13 21:09:48.389670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.600 [2024-07-13 21:09:48.389890] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 388.720 ms, result 0 00:18:35.535 00:18:35.535 00:18:35.535 21:09:49 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:36.101 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:36.101 21:09:49 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:36.101 21:09:49 -- ftl/trim.sh@109 -- # fio_kill 00:18:36.101 21:09:49 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:36.101 21:09:50 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:36.101 21:09:50 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:36.359 21:09:50 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:36.359 Process with pid 73155 is not found 00:18:36.359 21:09:50 -- ftl/trim.sh@20 -- # killprocess 73155 00:18:36.359 21:09:50 -- common/autotest_common.sh@926 -- # '[' -z 73155 ']' 00:18:36.359 21:09:50 -- common/autotest_common.sh@930 -- # kill -0 73155 00:18:36.359 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (73155) - No such process 00:18:36.359 21:09:50 -- common/autotest_common.sh@953 -- # echo 'Process with pid 73155 is not found' 00:18:36.359 ************************************ 00:18:36.359 END TEST ftl_trim 00:18:36.359 ************************************ 00:18:36.359 00:18:36.359 real 1m11.314s 00:18:36.359 user 1m37.096s 00:18:36.359 sys 0m6.466s 00:18:36.359 21:09:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:36.359 21:09:50 -- common/autotest_common.sh@10 -- # set +x 00:18:36.359 21:09:50 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:18:36.359 21:09:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:36.359 21:09:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:36.359 21:09:50 -- common/autotest_common.sh@10 -- # set +x 00:18:36.359 ************************************ 00:18:36.359 START TEST ftl_restore 00:18:36.360 ************************************ 00:18:36.360 21:09:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:18:36.360 * Looking for test storage... 
00:18:36.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.360 21:09:50 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:36.360 21:09:50 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:36.360 21:09:50 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.360 21:09:50 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.360 21:09:50 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:36.360 21:09:50 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:36.360 21:09:50 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.360 21:09:50 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:36.360 21:09:50 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:36.360 21:09:50 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.360 21:09:50 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.360 21:09:50 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:36.360 21:09:50 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:36.360 21:09:50 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.360 21:09:50 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.360 21:09:50 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:36.360 21:09:50 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:36.360 21:09:50 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.360 21:09:50 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.360 21:09:50 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:36.360 21:09:50 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:36.360 21:09:50 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.360 21:09:50 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.360 21:09:50 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.360 21:09:50 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.360 21:09:50 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:36.360 21:09:50 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:36.360 21:09:50 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.360 21:09:50 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.360 21:09:50 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.360 21:09:50 -- ftl/restore.sh@13 -- # mktemp -d 00:18:36.360 21:09:50 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Nd6xxYTopP 00:18:36.360 21:09:50 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:36.360 21:09:50 -- ftl/restore.sh@16 -- # case $opt in 00:18:36.360 21:09:50 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:18:36.360 21:09:50 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:36.360 21:09:50 -- ftl/restore.sh@23 -- # shift 2 00:18:36.360 21:09:50 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:18:36.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
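The xtrace records just above show restore.sh consuming its arguments: -c selects the NV cache device (nv_cache=0000:00:06.0) and the remaining positional argument becomes the base device (device=0000:00:07.0). A minimal bash sketch of that option handling, reconstructed from the trace alone (the getopts string and the variable names nv_cache, device, and timeout are taken from the trace; the surrounding layout is an assumption, not quoted from the SPDK source):

    # Reconstructed sketch of restore.sh argument parsing (assumed layout).
    # The trace shows: getopts :u:c:f opt / case $opt in / nv_cache=... /
    # shift 2 / device=... / timeout=240
    while getopts ':u:c:f' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # NV cache PCI address, e.g. 0000:00:06.0
        *) ;;                    # -u/-f paths are not exercised in this run
      esac
    done
    shift $((OPTIND - 1))        # traced above as 'shift 2' for '-c <bdf>'
    device=$1                    # base device PCI address, e.g. 0000:00:07.0
    timeout=240

Invoked here as 'restore.sh -c 0000:00:06.0 0000:00:07.0', this leaves nv_cache=0000:00:06.0 and device=0000:00:07.0, which is exactly what the trace records before the target process is started below.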
00:18:36.360 21:09:50 -- ftl/restore.sh@25 -- # timeout=240 00:18:36.360 21:09:50 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:36.360 21:09:50 -- ftl/restore.sh@39 -- # svcpid=73436 00:18:36.360 21:09:50 -- ftl/restore.sh@41 -- # waitforlisten 73436 00:18:36.360 21:09:50 -- common/autotest_common.sh@819 -- # '[' -z 73436 ']' 00:18:36.360 21:09:50 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.360 21:09:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.360 21:09:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:36.360 21:09:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.360 21:09:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:36.360 21:09:50 -- common/autotest_common.sh@10 -- # set +x 00:18:36.618 [2024-07-13 21:09:50.356163] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:36.618 [2024-07-13 21:09:50.356328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73436 ] 00:18:36.618 [2024-07-13 21:09:50.527584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.877 [2024-07-13 21:09:50.739187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:36.877 [2024-07-13 21:09:50.739413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.250 21:09:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:38.250 21:09:51 -- common/autotest_common.sh@852 -- # return 0 00:18:38.250 21:09:51 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:38.250 21:09:51 -- ftl/common.sh@54 -- # local name=nvme0 00:18:38.250 21:09:51 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:38.250 21:09:51 -- ftl/common.sh@56 -- # local size=103424 00:18:38.250 21:09:51 -- ftl/common.sh@59 -- # local base_bdev 00:18:38.250 21:09:51 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:38.508 21:09:52 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:38.508 21:09:52 -- ftl/common.sh@62 -- # local base_size 00:18:38.508 21:09:52 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:38.508 21:09:52 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:18:38.508 21:09:52 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:38.508 21:09:52 -- common/autotest_common.sh@1359 -- # local bs 00:18:38.508 21:09:52 -- common/autotest_common.sh@1360 -- # local nb 00:18:38.508 21:09:52 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:38.767 21:09:52 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:38.767 { 00:18:38.767 "name": "nvme0n1", 00:18:38.767 "aliases": [ 00:18:38.767 "43a3e800-41f0-46d4-9589-9ae346fcc45f" 00:18:38.767 ], 00:18:38.767 "product_name": "NVMe disk", 00:18:38.767 "block_size": 4096, 00:18:38.767 "num_blocks": 1310720, 00:18:38.767 "uuid": "43a3e800-41f0-46d4-9589-9ae346fcc45f", 00:18:38.767 "assigned_rate_limits": { 00:18:38.767 "rw_ios_per_sec": 0, 00:18:38.767 "rw_mbytes_per_sec": 0, 00:18:38.767 "r_mbytes_per_sec": 0, 00:18:38.767 "w_mbytes_per_sec": 0 00:18:38.767 }, 
00:18:38.767 "claimed": true, 00:18:38.767 "claim_type": "read_many_write_one", 00:18:38.767 "zoned": false, 00:18:38.767 "supported_io_types": { 00:18:38.767 "read": true, 00:18:38.767 "write": true, 00:18:38.767 "unmap": true, 00:18:38.767 "write_zeroes": true, 00:18:38.767 "flush": true, 00:18:38.767 "reset": true, 00:18:38.767 "compare": true, 00:18:38.767 "compare_and_write": false, 00:18:38.767 "abort": true, 00:18:38.767 "nvme_admin": true, 00:18:38.767 "nvme_io": true 00:18:38.767 }, 00:18:38.767 "driver_specific": { 00:18:38.767 "nvme": [ 00:18:38.767 { 00:18:38.767 "pci_address": "0000:00:07.0", 00:18:38.767 "trid": { 00:18:38.767 "trtype": "PCIe", 00:18:38.767 "traddr": "0000:00:07.0" 00:18:38.767 }, 00:18:38.767 "ctrlr_data": { 00:18:38.767 "cntlid": 0, 00:18:38.767 "vendor_id": "0x1b36", 00:18:38.767 "model_number": "QEMU NVMe Ctrl", 00:18:38.767 "serial_number": "12341", 00:18:38.767 "firmware_revision": "8.0.0", 00:18:38.767 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:38.767 "oacs": { 00:18:38.767 "security": 0, 00:18:38.767 "format": 1, 00:18:38.767 "firmware": 0, 00:18:38.767 "ns_manage": 1 00:18:38.767 }, 00:18:38.767 "multi_ctrlr": false, 00:18:38.767 "ana_reporting": false 00:18:38.767 }, 00:18:38.767 "vs": { 00:18:38.767 "nvme_version": "1.4" 00:18:38.767 }, 00:18:38.767 "ns_data": { 00:18:38.767 "id": 1, 00:18:38.767 "can_share": false 00:18:38.767 } 00:18:38.767 } 00:18:38.767 ], 00:18:38.767 "mp_policy": "active_passive" 00:18:38.767 } 00:18:38.767 } 00:18:38.767 ]' 00:18:38.767 21:09:52 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:38.767 21:09:52 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:38.767 21:09:52 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:38.767 21:09:52 -- common/autotest_common.sh@1363 -- # nb=1310720 00:18:38.767 21:09:52 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:18:38.767 21:09:52 -- common/autotest_common.sh@1367 -- # echo 5120 00:18:38.767 21:09:52 -- ftl/common.sh@63 -- # base_size=5120 00:18:38.767 21:09:52 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:38.767 21:09:52 -- ftl/common.sh@67 -- # clear_lvols 00:18:38.767 21:09:52 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:38.767 21:09:52 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:39.025 21:09:52 -- ftl/common.sh@28 -- # stores=d8281d7a-f3fd-4e6e-a596-dce698955776 00:18:39.025 21:09:52 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:39.025 21:09:52 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d8281d7a-f3fd-4e6e-a596-dce698955776 00:18:39.283 21:09:53 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:39.540 21:09:53 -- ftl/common.sh@68 -- # lvs=c0ff64b5-7271-4fa6-b4bb-96dec14f331f 00:18:39.540 21:09:53 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c0ff64b5-7271-4fa6-b4bb-96dec14f331f 00:18:39.798 21:09:53 -- ftl/restore.sh@43 -- # split_bdev=5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:39.798 21:09:53 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:18:39.798 21:09:53 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:39.798 21:09:53 -- ftl/common.sh@35 -- # local name=nvc0 00:18:39.798 21:09:53 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:39.798 21:09:53 -- ftl/common.sh@37 -- # local 
base_bdev=5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:39.798 21:09:53 -- ftl/common.sh@38 -- # local cache_size= 00:18:39.798 21:09:53 -- ftl/common.sh@41 -- # get_bdev_size 5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:39.798 21:09:53 -- common/autotest_common.sh@1357 -- # local bdev_name=5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:39.798 21:09:53 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:39.798 21:09:53 -- common/autotest_common.sh@1359 -- # local bs 00:18:39.798 21:09:53 -- common/autotest_common.sh@1360 -- # local nb 00:18:39.798 21:09:53 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:40.056 21:09:53 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:40.056 { 00:18:40.056 "name": "5deca72d-a8b7-42a0-a810-18b5b7a42e71", 00:18:40.056 "aliases": [ 00:18:40.056 "lvs/nvme0n1p0" 00:18:40.056 ], 00:18:40.056 "product_name": "Logical Volume", 00:18:40.056 "block_size": 4096, 00:18:40.056 "num_blocks": 26476544, 00:18:40.056 "uuid": "5deca72d-a8b7-42a0-a810-18b5b7a42e71", 00:18:40.056 "assigned_rate_limits": { 00:18:40.056 "rw_ios_per_sec": 0, 00:18:40.056 "rw_mbytes_per_sec": 0, 00:18:40.056 "r_mbytes_per_sec": 0, 00:18:40.056 "w_mbytes_per_sec": 0 00:18:40.056 }, 00:18:40.056 "claimed": false, 00:18:40.056 "zoned": false, 00:18:40.056 "supported_io_types": { 00:18:40.056 "read": true, 00:18:40.056 "write": true, 00:18:40.056 "unmap": true, 00:18:40.056 "write_zeroes": true, 00:18:40.056 "flush": false, 00:18:40.056 "reset": true, 00:18:40.056 "compare": false, 00:18:40.056 "compare_and_write": false, 00:18:40.056 "abort": false, 00:18:40.056 "nvme_admin": false, 00:18:40.056 "nvme_io": false 00:18:40.056 }, 00:18:40.056 "driver_specific": { 00:18:40.056 "lvol": { 00:18:40.056 "lvol_store_uuid": "c0ff64b5-7271-4fa6-b4bb-96dec14f331f", 00:18:40.056 "base_bdev": "nvme0n1", 00:18:40.056 "thin_provision": true, 00:18:40.056 "snapshot": false, 00:18:40.056 "clone": false, 00:18:40.056 "esnap_clone": false 00:18:40.056 } 00:18:40.056 } 00:18:40.056 } 00:18:40.056 ]' 00:18:40.056 21:09:53 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:40.056 21:09:53 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:40.056 21:09:53 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:40.056 21:09:53 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:40.056 21:09:53 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:40.056 21:09:53 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:40.056 21:09:53 -- ftl/common.sh@41 -- # local base_size=5171 00:18:40.056 21:09:53 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:40.056 21:09:53 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:40.623 21:09:54 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:40.623 21:09:54 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:40.623 21:09:54 -- ftl/common.sh@48 -- # get_bdev_size 5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:40.623 21:09:54 -- common/autotest_common.sh@1357 -- # local bdev_name=5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:40.623 21:09:54 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:40.623 21:09:54 -- common/autotest_common.sh@1359 -- # local bs 00:18:40.623 21:09:54 -- common/autotest_common.sh@1360 -- # local nb 00:18:40.623 21:09:54 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 
5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:40.623 21:09:54 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:40.623 { 00:18:40.623 "name": "5deca72d-a8b7-42a0-a810-18b5b7a42e71", 00:18:40.623 "aliases": [ 00:18:40.623 "lvs/nvme0n1p0" 00:18:40.623 ], 00:18:40.623 "product_name": "Logical Volume", 00:18:40.623 "block_size": 4096, 00:18:40.623 "num_blocks": 26476544, 00:18:40.623 "uuid": "5deca72d-a8b7-42a0-a810-18b5b7a42e71", 00:18:40.623 "assigned_rate_limits": { 00:18:40.623 "rw_ios_per_sec": 0, 00:18:40.623 "rw_mbytes_per_sec": 0, 00:18:40.623 "r_mbytes_per_sec": 0, 00:18:40.623 "w_mbytes_per_sec": 0 00:18:40.623 }, 00:18:40.623 "claimed": false, 00:18:40.623 "zoned": false, 00:18:40.623 "supported_io_types": { 00:18:40.623 "read": true, 00:18:40.623 "write": true, 00:18:40.623 "unmap": true, 00:18:40.623 "write_zeroes": true, 00:18:40.623 "flush": false, 00:18:40.623 "reset": true, 00:18:40.623 "compare": false, 00:18:40.623 "compare_and_write": false, 00:18:40.623 "abort": false, 00:18:40.623 "nvme_admin": false, 00:18:40.623 "nvme_io": false 00:18:40.623 }, 00:18:40.623 "driver_specific": { 00:18:40.623 "lvol": { 00:18:40.623 "lvol_store_uuid": "c0ff64b5-7271-4fa6-b4bb-96dec14f331f", 00:18:40.623 "base_bdev": "nvme0n1", 00:18:40.623 "thin_provision": true, 00:18:40.623 "snapshot": false, 00:18:40.623 "clone": false, 00:18:40.623 "esnap_clone": false 00:18:40.623 } 00:18:40.623 } 00:18:40.623 } 00:18:40.623 ]' 00:18:40.623 21:09:54 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:40.881 21:09:54 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:40.881 21:09:54 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:40.881 21:09:54 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:40.881 21:09:54 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:40.881 21:09:54 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:40.881 21:09:54 -- ftl/common.sh@48 -- # cache_size=5171 00:18:40.881 21:09:54 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:41.138 21:09:54 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:41.138 21:09:54 -- ftl/restore.sh@48 -- # get_bdev_size 5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:41.138 21:09:54 -- common/autotest_common.sh@1357 -- # local bdev_name=5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:41.138 21:09:54 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:41.138 21:09:54 -- common/autotest_common.sh@1359 -- # local bs 00:18:41.138 21:09:54 -- common/autotest_common.sh@1360 -- # local nb 00:18:41.138 21:09:54 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5deca72d-a8b7-42a0-a810-18b5b7a42e71 00:18:41.395 21:09:55 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:41.395 { 00:18:41.395 "name": "5deca72d-a8b7-42a0-a810-18b5b7a42e71", 00:18:41.395 "aliases": [ 00:18:41.395 "lvs/nvme0n1p0" 00:18:41.395 ], 00:18:41.395 "product_name": "Logical Volume", 00:18:41.395 "block_size": 4096, 00:18:41.395 "num_blocks": 26476544, 00:18:41.395 "uuid": "5deca72d-a8b7-42a0-a810-18b5b7a42e71", 00:18:41.395 "assigned_rate_limits": { 00:18:41.395 "rw_ios_per_sec": 0, 00:18:41.395 "rw_mbytes_per_sec": 0, 00:18:41.395 "r_mbytes_per_sec": 0, 00:18:41.395 "w_mbytes_per_sec": 0 00:18:41.395 }, 00:18:41.395 "claimed": false, 00:18:41.395 "zoned": false, 00:18:41.395 "supported_io_types": { 00:18:41.395 "read": true, 00:18:41.395 "write": true, 00:18:41.395 "unmap": true, 00:18:41.395 "write_zeroes": 
true, 00:18:41.395 "flush": false, 00:18:41.395 "reset": true, 00:18:41.395 "compare": false, 00:18:41.395 "compare_and_write": false, 00:18:41.395 "abort": false, 00:18:41.395 "nvme_admin": false, 00:18:41.395 "nvme_io": false 00:18:41.395 }, 00:18:41.395 "driver_specific": { 00:18:41.395 "lvol": { 00:18:41.395 "lvol_store_uuid": "c0ff64b5-7271-4fa6-b4bb-96dec14f331f", 00:18:41.395 "base_bdev": "nvme0n1", 00:18:41.395 "thin_provision": true, 00:18:41.395 "snapshot": false, 00:18:41.395 "clone": false, 00:18:41.395 "esnap_clone": false 00:18:41.395 } 00:18:41.395 } 00:18:41.395 } 00:18:41.395 ]' 00:18:41.395 21:09:55 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:41.395 21:09:55 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:41.395 21:09:55 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:41.395 21:09:55 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:41.395 21:09:55 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:41.395 21:09:55 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:41.395 21:09:55 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:41.395 21:09:55 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5deca72d-a8b7-42a0-a810-18b5b7a42e71 --l2p_dram_limit 10' 00:18:41.395 21:09:55 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:41.395 21:09:55 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:18:41.395 21:09:55 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:41.395 21:09:55 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:41.395 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:41.395 21:09:55 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5deca72d-a8b7-42a0-a810-18b5b7a42e71 --l2p_dram_limit 10 -c nvc0n1p0 00:18:41.654 [2024-07-13 21:09:55.425438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.654 [2024-07-13 21:09:55.425496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:41.654 [2024-07-13 21:09:55.425521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:41.654 [2024-07-13 21:09:55.425534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.654 [2024-07-13 21:09:55.425614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.654 [2024-07-13 21:09:55.425633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:41.654 [2024-07-13 21:09:55.425649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:41.654 [2024-07-13 21:09:55.425661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.654 [2024-07-13 21:09:55.425693] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:41.654 [2024-07-13 21:09:55.426701] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:41.654 [2024-07-13 21:09:55.426744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.654 [2024-07-13 21:09:55.426759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:41.654 [2024-07-13 21:09:55.426774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:18:41.654 [2024-07-13 21:09:55.426786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.654 [2024-07-13 21:09:55.426923] 
mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2e284b7a-a9ff-453f-932a-f2e27dc3593a 00:18:41.654 [2024-07-13 21:09:55.427978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.654 [2024-07-13 21:09:55.428024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:41.654 [2024-07-13 21:09:55.428041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:41.654 [2024-07-13 21:09:55.428058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.654 [2024-07-13 21:09:55.432387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.654 [2024-07-13 21:09:55.432437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:41.654 [2024-07-13 21:09:55.432454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.272 ms 00:18:41.654 [2024-07-13 21:09:55.432469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.654 [2024-07-13 21:09:55.432588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.654 [2024-07-13 21:09:55.432610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:41.654 [2024-07-13 21:09:55.432623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:41.654 [2024-07-13 21:09:55.432642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.654 [2024-07-13 21:09:55.432719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.654 [2024-07-13 21:09:55.432741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:41.654 [2024-07-13 21:09:55.432754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:41.654 [2024-07-13 21:09:55.432771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.654 [2024-07-13 21:09:55.432808] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:41.654 [2024-07-13 21:09:55.437298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.654 [2024-07-13 21:09:55.437341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:41.654 [2024-07-13 21:09:55.437360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.499 ms 00:18:41.654 [2024-07-13 21:09:55.437373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.654 [2024-07-13 21:09:55.437423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.654 [2024-07-13 21:09:55.437439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:41.654 [2024-07-13 21:09:55.437454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:41.654 [2024-07-13 21:09:55.437466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.654 [2024-07-13 21:09:55.437523] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:41.654 [2024-07-13 21:09:55.437661] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:41.655 [2024-07-13 21:09:55.437685] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:41.655 [2024-07-13 21:09:55.437701] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] 
layout blob store 0x140 bytes 00:18:41.655 [2024-07-13 21:09:55.437719] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:41.655 [2024-07-13 21:09:55.437733] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:41.655 [2024-07-13 21:09:55.437748] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:41.655 [2024-07-13 21:09:55.437759] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:41.655 [2024-07-13 21:09:55.437773] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:41.655 [2024-07-13 21:09:55.437788] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:41.655 [2024-07-13 21:09:55.437802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.655 [2024-07-13 21:09:55.437814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:41.655 [2024-07-13 21:09:55.437863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:18:41.655 [2024-07-13 21:09:55.437878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.655 [2024-07-13 21:09:55.437958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.655 [2024-07-13 21:09:55.437973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:41.655 [2024-07-13 21:09:55.437987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:41.655 [2024-07-13 21:09:55.438000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.655 [2024-07-13 21:09:55.438092] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:41.655 [2024-07-13 21:09:55.438108] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:41.655 [2024-07-13 21:09:55.438123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.655 [2024-07-13 21:09:55.438138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438153] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:41.655 [2024-07-13 21:09:55.438165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:41.655 [2024-07-13 21:09:55.438189] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:41.655 [2024-07-13 21:09:55.438205] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.655 [2024-07-13 21:09:55.438249] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:41.655 [2024-07-13 21:09:55.438268] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:41.655 [2024-07-13 21:09:55.438293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.655 [2024-07-13 21:09:55.438312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:41.655 [2024-07-13 21:09:55.438327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:41.655 [2024-07-13 21:09:55.438338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438354] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:41.655 [2024-07-13 21:09:55.438365] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:41.655 [2024-07-13 21:09:55.438379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438391] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:41.655 [2024-07-13 21:09:55.438403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:41.655 [2024-07-13 21:09:55.438415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:41.655 [2024-07-13 21:09:55.438427] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:41.655 [2024-07-13 21:09:55.438438] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:41.655 [2024-07-13 21:09:55.438462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:41.655 [2024-07-13 21:09:55.438475] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:41.655 [2024-07-13 21:09:55.438497] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:41.655 [2024-07-13 21:09:55.438509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:41.655 [2024-07-13 21:09:55.438550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:41.655 [2024-07-13 21:09:55.438576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:41.655 [2024-07-13 21:09:55.438606] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:41.655 [2024-07-13 21:09:55.438618] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.655 [2024-07-13 21:09:55.438642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:41.655 [2024-07-13 21:09:55.438657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:41.655 [2024-07-13 21:09:55.438668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.655 [2024-07-13 21:09:55.438680] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:41.655 [2024-07-13 21:09:55.438692] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:41.655 [2024-07-13 21:09:55.438706] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.655 [2024-07-13 21:09:55.438717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.655 [2024-07-13 21:09:55.438732] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:41.655 [2024-07-13 21:09:55.438743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:41.655 [2024-07-13 21:09:55.438756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:41.655 [2024-07-13 21:09:55.438767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:41.655 [2024-07-13 
21:09:55.438947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:41.655 [2024-07-13 21:09:55.438971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:41.655 [2024-07-13 21:09:55.438988] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:41.655 [2024-07-13 21:09:55.439004] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.655 [2024-07-13 21:09:55.439023] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:41.655 [2024-07-13 21:09:55.439036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:41.655 [2024-07-13 21:09:55.439049] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:41.655 [2024-07-13 21:09:55.439061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:41.655 [2024-07-13 21:09:55.439075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:41.655 [2024-07-13 21:09:55.439087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:41.655 [2024-07-13 21:09:55.439101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:41.655 [2024-07-13 21:09:55.439112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:41.655 [2024-07-13 21:09:55.439126] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:41.655 [2024-07-13 21:09:55.439138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:41.655 [2024-07-13 21:09:55.439152] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:41.655 [2024-07-13 21:09:55.439171] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:41.655 [2024-07-13 21:09:55.439201] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:41.655 [2024-07-13 21:09:55.439216] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:41.655 [2024-07-13 21:09:55.439231] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.655 [2024-07-13 21:09:55.439245] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:41.655 [2024-07-13 21:09:55.439259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:41.655 [2024-07-13 21:09:55.439271] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:41.655 [2024-07-13 21:09:55.439285] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:41.655 [2024-07-13 21:09:55.439298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.655 [2024-07-13 21:09:55.439313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:41.655 [2024-07-13 21:09:55.439326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:18:41.655 [2024-07-13 21:09:55.439340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.655 [2024-07-13 21:09:55.457306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.655 [2024-07-13 21:09:55.457360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:41.655 [2024-07-13 21:09:55.457379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.907 ms 00:18:41.655 [2024-07-13 21:09:55.457394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.655 [2024-07-13 21:09:55.457502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.655 [2024-07-13 21:09:55.457523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:41.655 [2024-07-13 21:09:55.457536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:41.655 [2024-07-13 21:09:55.457550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.655 [2024-07-13 21:09:55.495991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.655 [2024-07-13 21:09:55.496046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:41.655 [2024-07-13 21:09:55.496066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.370 ms 00:18:41.655 [2024-07-13 21:09:55.496081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.655 [2024-07-13 21:09:55.496144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.655 [2024-07-13 21:09:55.496167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:41.656 [2024-07-13 21:09:55.496180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:41.656 [2024-07-13 21:09:55.496194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.656 [2024-07-13 21:09:55.496574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.656 [2024-07-13 21:09:55.496600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:41.656 [2024-07-13 21:09:55.496613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:18:41.656 [2024-07-13 21:09:55.496627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.656 [2024-07-13 21:09:55.496768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.656 [2024-07-13 21:09:55.496791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:41.656 [2024-07-13 21:09:55.496803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:41.656 [2024-07-13 21:09:55.496818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.656 [2024-07-13 21:09:55.514700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:41.656 [2024-07-13 21:09:55.514749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:41.656 [2024-07-13 21:09:55.514768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.835 ms 00:18:41.656 [2024-07-13 21:09:55.514782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.656 [2024-07-13 21:09:55.528266] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:41.656 [2024-07-13 21:09:55.530951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.656 [2024-07-13 21:09:55.530991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:41.656 [2024-07-13 21:09:55.531012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.035 ms 00:18:41.656 [2024-07-13 21:09:55.531025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.913 [2024-07-13 21:09:55.591171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.913 [2024-07-13 21:09:55.591244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:41.913 [2024-07-13 21:09:55.591270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.103 ms 00:18:41.913 [2024-07-13 21:09:55.591284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.913 [2024-07-13 21:09:55.591351] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:41.913 [2024-07-13 21:09:55.591373] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:43.832 [2024-07-13 21:09:57.651665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.832 [2024-07-13 21:09:57.651747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:43.832 [2024-07-13 21:09:57.651779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2060.317 ms 00:18:43.832 [2024-07-13 21:09:57.651795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.832 [2024-07-13 21:09:57.652106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.832 [2024-07-13 21:09:57.652145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:43.832 [2024-07-13 21:09:57.652167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:18:43.832 [2024-07-13 21:09:57.652182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.832 [2024-07-13 21:09:57.690091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.832 [2024-07-13 21:09:57.690142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:43.832 [2024-07-13 21:09:57.690169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.815 ms 00:18:43.832 [2024-07-13 21:09:57.690186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.832 [2024-07-13 21:09:57.727720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.832 [2024-07-13 21:09:57.727771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:43.832 [2024-07-13 21:09:57.727801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.468 ms 00:18:43.832 [2024-07-13 21:09:57.727817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.832 
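
A quick arithmetic check of the figures reported in this trace may help when reading it; the snippet below is a sketch added for illustration and is not part of the test run. It confirms that the l2p region size follows from the L2P geometry printed earlier, and derives the effective bandwidth of the scrub step just completed.

# Sketch, not part of the run: sanity-check the reported FTL figures.
# L2P table: 20971520 entries * 4 bytes per entry = 80 MiB,
# matching the 80.00 MiB l2p region in the layout dump.
echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80
# NV cache scrub: 4 GiB (4096 MiB) in 2060.317 ms, effective rate in MiB/s:
echo "scale=1; 4096 / 2.060317" | bc     # prints 1988.0
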
[2024-07-13 21:09:57.728344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.832 [2024-07-13 21:09:57.728389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.832 [2024-07-13 21:09:57.728413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:18:43.832 [2024-07-13 21:09:57.728429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.090 [2024-07-13 21:09:57.821705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.090 [2024-07-13 21:09:57.821772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:44.090 [2024-07-13 21:09:57.821801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.191 ms 00:18:44.090 [2024-07-13 21:09:57.821817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.090 [2024-07-13 21:09:57.860765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.090 [2024-07-13 21:09:57.860825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:44.090 [2024-07-13 21:09:57.860870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.862 ms 00:18:44.090 [2024-07-13 21:09:57.860892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.091 [2024-07-13 21:09:57.863305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.091 [2024-07-13 21:09:57.863356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:44.091 [2024-07-13 21:09:57.863383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.348 ms 00:18:44.091 [2024-07-13 21:09:57.863398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.091 [2024-07-13 21:09:57.901629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.091 [2024-07-13 21:09:57.901680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.091 [2024-07-13 21:09:57.901705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.144 ms 00:18:44.091 [2024-07-13 21:09:57.901721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.091 [2024-07-13 21:09:57.901799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.091 [2024-07-13 21:09:57.901823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.091 [2024-07-13 21:09:57.901866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:44.091 [2024-07-13 21:09:57.901885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.091 [2024-07-13 21:09:57.902035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.091 [2024-07-13 21:09:57.902059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:44.091 [2024-07-13 21:09:57.902081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:44.091 [2024-07-13 21:09:57.902095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.091 [2024-07-13 21:09:57.903277] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2477.255 ms, result 0 00:18:44.091 { 00:18:44.091 "name": "ftl0", 00:18:44.091 "uuid": "2e284b7a-a9ff-453f-932a-f2e27dc3593a" 00:18:44.091 } 00:18:44.091 21:09:57 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:44.091 21:09:57 -- 
ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:44.349 21:09:58 -- ftl/restore.sh@63 -- # echo ']}' 00:18:44.349 21:09:58 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:44.607 [2024-07-13 21:09:58.430724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.607 [2024-07-13 21:09:58.430801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:44.607 [2024-07-13 21:09:58.430823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:44.607 [2024-07-13 21:09:58.430859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.607 [2024-07-13 21:09:58.430902] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.607 [2024-07-13 21:09:58.434198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.607 [2024-07-13 21:09:58.434233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:44.607 [2024-07-13 21:09:58.434252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:18:44.607 [2024-07-13 21:09:58.434265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.607 [2024-07-13 21:09:58.434614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.607 [2024-07-13 21:09:58.434641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:44.607 [2024-07-13 21:09:58.434662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:44.607 [2024-07-13 21:09:58.434675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.607 [2024-07-13 21:09:58.438047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.607 [2024-07-13 21:09:58.438079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:44.607 [2024-07-13 21:09:58.438097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.330 ms 00:18:44.607 [2024-07-13 21:09:58.438110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.607 [2024-07-13 21:09:58.444792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.607 [2024-07-13 21:09:58.444824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:44.607 [2024-07-13 21:09:58.444856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.649 ms 00:18:44.607 [2024-07-13 21:09:58.444873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.607 [2024-07-13 21:09:58.476019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.607 [2024-07-13 21:09:58.476063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:44.607 [2024-07-13 21:09:58.476084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.043 ms 00:18:44.607 [2024-07-13 21:09:58.476097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.607 [2024-07-13 21:09:58.494624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.607 [2024-07-13 21:09:58.494669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:44.607 [2024-07-13 21:09:58.494691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.463 ms 00:18:44.607 [2024-07-13 21:09:58.494705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:44.607 [2024-07-13 21:09:58.494929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.607 [2024-07-13 21:09:58.494954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:44.607 [2024-07-13 21:09:58.494971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:18:44.607 [2024-07-13 21:09:58.494984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.607 [2024-07-13 21:09:58.526197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.607 [2024-07-13 21:09:58.526240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:44.607 [2024-07-13 21:09:58.526261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.180 ms 00:18:44.607 [2024-07-13 21:09:58.526274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.866 [2024-07-13 21:09:58.557173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.866 [2024-07-13 21:09:58.557215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:44.866 [2024-07-13 21:09:58.557235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.844 ms 00:18:44.866 [2024-07-13 21:09:58.557248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.866 [2024-07-13 21:09:58.587826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.866 [2024-07-13 21:09:58.587877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:44.866 [2024-07-13 21:09:58.587898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.523 ms 00:18:44.866 [2024-07-13 21:09:58.587911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.866 [2024-07-13 21:09:58.618551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.866 [2024-07-13 21:09:58.618593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:44.866 [2024-07-13 21:09:58.618613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.517 ms 00:18:44.866 [2024-07-13 21:09:58.618626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.866 [2024-07-13 21:09:58.618683] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:44.866 [2024-07-13 21:09:58.618708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 
[2024-07-13 21:09:58.618818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.618998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 00:18:44.867 [2024-07-13 21:09:58.619195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.619999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.620011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:44.867 [2024-07-13 21:09:58.620024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:44.868 [2024-07-13 21:09:58.620037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:44.868 [2024-07-13 21:09:58.620053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:44.868 [2024-07-13 21:09:58.620065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:44.868 [2024-07-13 21:09:58.620081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:44.868 [2024-07-13 21:09:58.620094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:44.868 [2024-07-13 21:09:58.620108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:44.868 [2024-07-13 21:09:58.620138] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:44.868 [2024-07-13 21:09:58.620159] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e284b7a-a9ff-453f-932a-f2e27dc3593a 00:18:44.868 [2024-07-13 21:09:58.620171] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:44.868 [2024-07-13 21:09:58.620185] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:44.868 [2024-07-13 21:09:58.620196] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:44.868 [2024-07-13 21:09:58.620210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:44.868 [2024-07-13 21:09:58.620221] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:44.868 [2024-07-13 21:09:58.620236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:44.868 [2024-07-13 21:09:58.620248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:44.868 
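
Two derived values in the statistics block deserve a note, using only numbers from this log: WAF, the write amplification factor, is total media writes divided by user writes, so 960 media writes against 0 user writes is correctly reported as inf; and the per-band capacity shown for every band converts to bytes as in this sketch (not part of the run), assuming the 4 KiB FTL block size the layout dumps imply.

# Sketch, not part of the run: usable capacity of one band,
# assuming the 4 KiB block size implied by the layout dumps.
echo $(( 261120 * 4096 / 1024 / 1024 ))  # prints 1020 (MiB per band)
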
[2024-07-13 21:09:58.620260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:44.868 [2024-07-13 21:09:58.620271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:44.868 [2024-07-13 21:09:58.620286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.868 [2024-07-13 21:09:58.620298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:44.868 [2024-07-13 21:09:58.620313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.609 ms 00:18:44.868 [2024-07-13 21:09:58.620325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.868 [2024-07-13 21:09:58.636898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.868 [2024-07-13 21:09:58.636938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:44.868 [2024-07-13 21:09:58.636958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.505 ms 00:18:44.868 [2024-07-13 21:09:58.636972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.868 [2024-07-13 21:09:58.637214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.868 [2024-07-13 21:09:58.637236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:44.868 [2024-07-13 21:09:58.637252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:18:44.868 [2024-07-13 21:09:58.637267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.868 [2024-07-13 21:09:58.694965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.868 [2024-07-13 21:09:58.695016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.868 [2024-07-13 21:09:58.695036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.868 [2024-07-13 21:09:58.695049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.868 [2024-07-13 21:09:58.695128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.868 [2024-07-13 21:09:58.695144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.868 [2024-07-13 21:09:58.695159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.868 [2024-07-13 21:09:58.695173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.868 [2024-07-13 21:09:58.695283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.868 [2024-07-13 21:09:58.695303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.868 [2024-07-13 21:09:58.695319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.868 [2024-07-13 21:09:58.695331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.868 [2024-07-13 21:09:58.695361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.868 [2024-07-13 21:09:58.695375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.868 [2024-07-13 21:09:58.695390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.868 [2024-07-13 21:09:58.695401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.126 [2024-07-13 21:09:58.796958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.126 [2024-07-13 21:09:58.797025] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.126 [2024-07-13 21:09:58.797046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.126 [2024-07-13 21:09:58.797060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.126 [2024-07-13 21:09:58.835940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.126 [2024-07-13 21:09:58.835990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.126 [2024-07-13 21:09:58.836012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.126 [2024-07-13 21:09:58.836029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.126 [2024-07-13 21:09:58.836146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.126 [2024-07-13 21:09:58.836167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.126 [2024-07-13 21:09:58.836182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.126 [2024-07-13 21:09:58.836194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.126 [2024-07-13 21:09:58.836265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.126 [2024-07-13 21:09:58.836283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.126 [2024-07-13 21:09:58.836297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.126 [2024-07-13 21:09:58.836309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.126 [2024-07-13 21:09:58.836440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.126 [2024-07-13 21:09:58.836459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.126 [2024-07-13 21:09:58.836474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.126 [2024-07-13 21:09:58.836486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.126 [2024-07-13 21:09:58.836552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.126 [2024-07-13 21:09:58.836571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:45.126 [2024-07-13 21:09:58.836586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.126 [2024-07-13 21:09:58.836597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.126 [2024-07-13 21:09:58.836650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.126 [2024-07-13 21:09:58.836669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.126 [2024-07-13 21:09:58.836683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.126 [2024-07-13 21:09:58.836695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.126 [2024-07-13 21:09:58.836756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.126 [2024-07-13 21:09:58.836773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.126 [2024-07-13 21:09:58.836788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.126 [2024-07-13 21:09:58.836800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.126 [2024-07-13 21:09:58.837003] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 406.233 ms, result 0 00:18:45.126 true 00:18:45.126 21:09:58 -- ftl/restore.sh@66 -- # killprocess 73436 00:18:45.126 21:09:58 -- common/autotest_common.sh@926 -- # '[' -z 73436 ']' 00:18:45.126 21:09:58 -- common/autotest_common.sh@930 -- # kill -0 73436 00:18:45.126 21:09:58 -- common/autotest_common.sh@931 -- # uname 00:18:45.126 21:09:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:45.126 21:09:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73436 00:18:45.126 killing process with pid 73436 00:18:45.126 21:09:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:45.126 21:09:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:45.126 21:09:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73436' 00:18:45.126 21:09:58 -- common/autotest_common.sh@945 -- # kill 73436 00:18:45.126 21:09:58 -- common/autotest_common.sh@950 -- # wait 73436 00:18:50.388 21:10:03 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:54.574 262144+0 records in 00:18:54.574 262144+0 records out 00:18:54.574 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.42844 s, 242 MB/s 00:18:54.574 21:10:07 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:56.478 21:10:09 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:56.478 [2024-07-13 21:10:09.992129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:56.478 [2024-07-13 21:10:09.992312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73691 ] 00:18:56.478 [2024-07-13 21:10:10.155796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.478 [2024-07-13 21:10:10.352137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.737 [2024-07-13 21:10:10.648929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:56.737 [2024-07-13 21:10:10.649016] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:56.998 [2024-07-13 21:10:10.802116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.802165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:56.998 [2024-07-13 21:10:10.802202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:56.998 [2024-07-13 21:10:10.802214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.802276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.802295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:56.998 [2024-07-13 21:10:10.802306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:56.998 [2024-07-13 21:10:10.802317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.802346] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:56.998 [2024-07-13 21:10:10.803225] 
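
Between the clean 'FTL shutdown' above and the second startup that begins below, the restore test stages a 1 GiB data file and writes it through ftl0 with spdk_dd; the reported numbers are internally consistent, as this sketch (not part of the run) shows.

# Sketch, not part of the run: checking the dd staging step traced above.
# 256K records of 4 KiB each is exactly 1 GiB:
echo $(( 262144 * 4096 ))                    # prints 1073741824
# dd's reported rate in decimal MB/s: 1073741824 B / 4.42844 s
echo "1073741824 / 4.42844 / 1000000" | bc   # prints 242
# Commands driven by restore.sh, verbatim from this log:
#   dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
#   md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
#   /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
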
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:56.998 [2024-07-13 21:10:10.803257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.803270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:56.998 [2024-07-13 21:10:10.803282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:18:56.998 [2024-07-13 21:10:10.803293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.804564] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:56.998 [2024-07-13 21:10:10.819493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.819533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:56.998 [2024-07-13 21:10:10.819572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.930 ms 00:18:56.998 [2024-07-13 21:10:10.819583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.819670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.819691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:56.998 [2024-07-13 21:10:10.819703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:56.998 [2024-07-13 21:10:10.819713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.824390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.824431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:56.998 [2024-07-13 21:10:10.824475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:18:56.998 [2024-07-13 21:10:10.824487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.824623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.824659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:56.998 [2024-07-13 21:10:10.824672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:18:56.998 [2024-07-13 21:10:10.824683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.824752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.824775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:56.998 [2024-07-13 21:10:10.824796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:56.998 [2024-07-13 21:10:10.824812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.824924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:56.998 [2024-07-13 21:10:10.829034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.829068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:56.998 [2024-07-13 21:10:10.829100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.124 ms 00:18:56.998 [2024-07-13 21:10:10.829111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 
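
The layout and superblock dumps that follow repeat the first startup's geometry in two notations, hex block counts and MiB figures; the two can be cross-checked against each other, as in this sketch (not part of the run), again assuming the 4 KiB block size both dumps imply.

# Sketch, not part of the run: cross-check the SB metadata dump below
# against the MiB layout dump, assuming 4 KiB blocks.
# Region type:0x2, blk_sz 0x5000: 20480 blocks -> 80 MiB (the l2p region).
echo $(( 0x5000 * 4096 / 1024 / 1024 ))      # prints 80
# Region type:0x9, blk_sz 0x1900000: -> 102400 MiB (the data_btm region).
echo $(( 0x1900000 * 4096 / 1024 / 1024 ))   # prints 102400
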
[2024-07-13 21:10:10.829158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.829174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:56.998 [2024-07-13 21:10:10.829202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:56.998 [2024-07-13 21:10:10.829212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.829251] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:56.998 [2024-07-13 21:10:10.829284] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:56.998 [2024-07-13 21:10:10.829321] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:56.998 [2024-07-13 21:10:10.829339] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:56.998 [2024-07-13 21:10:10.829410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:56.998 [2024-07-13 21:10:10.829424] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:56.998 [2024-07-13 21:10:10.829437] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:56.998 [2024-07-13 21:10:10.829451] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:56.998 [2024-07-13 21:10:10.829463] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:56.998 [2024-07-13 21:10:10.829479] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:56.998 [2024-07-13 21:10:10.829489] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:56.998 [2024-07-13 21:10:10.829499] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:56.998 [2024-07-13 21:10:10.829508] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:56.998 [2024-07-13 21:10:10.829519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.829530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:56.998 [2024-07-13 21:10:10.829541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:56.998 [2024-07-13 21:10:10.829551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.829622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.998 [2024-07-13 21:10:10.829654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:56.998 [2024-07-13 21:10:10.829670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:56.998 [2024-07-13 21:10:10.829681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.998 [2024-07-13 21:10:10.829778] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:56.998 [2024-07-13 21:10:10.829795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:56.998 [2024-07-13 21:10:10.829807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:56.998 [2024-07-13 21:10:10.829818] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.998 [2024-07-13 21:10:10.829829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:56.998 [2024-07-13 21:10:10.829840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:56.998 [2024-07-13 21:10:10.829850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:56.998 [2024-07-13 21:10:10.829861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:56.998 [2024-07-13 21:10:10.829872] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:56.998 [2024-07-13 21:10:10.829900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:56.998 [2024-07-13 21:10:10.829932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:56.998 [2024-07-13 21:10:10.829943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:56.998 [2024-07-13 21:10:10.829953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:56.998 [2024-07-13 21:10:10.829964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:56.998 [2024-07-13 21:10:10.829974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:56.998 [2024-07-13 21:10:10.829985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.998 [2024-07-13 21:10:10.830012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:56.998 [2024-07-13 21:10:10.830038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:56.998 [2024-07-13 21:10:10.830048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.998 [2024-07-13 21:10:10.830058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:56.998 [2024-07-13 21:10:10.830068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:56.998 [2024-07-13 21:10:10.830092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:56.998 [2024-07-13 21:10:10.830103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:56.998 [2024-07-13 21:10:10.830114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:56.998 [2024-07-13 21:10:10.830124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:56.998 [2024-07-13 21:10:10.830134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:56.998 [2024-07-13 21:10:10.830144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:56.998 [2024-07-13 21:10:10.830154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:56.998 [2024-07-13 21:10:10.830163] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:56.998 [2024-07-13 21:10:10.830173] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:56.998 [2024-07-13 21:10:10.830183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:56.999 [2024-07-13 21:10:10.830193] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:56.999 [2024-07-13 21:10:10.830203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:56.999 [2024-07-13 21:10:10.830213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:56.999 [2024-07-13 21:10:10.830223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:56.999 [2024-07-13 21:10:10.830232] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:56.999 [2024-07-13 21:10:10.830242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:56.999 [2024-07-13 21:10:10.830252] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:56.999 [2024-07-13 21:10:10.830262] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:56.999 [2024-07-13 21:10:10.830287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:56.999 [2024-07-13 21:10:10.830314] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:56.999 [2024-07-13 21:10:10.830340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:56.999 [2024-07-13 21:10:10.830351] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:56.999 [2024-07-13 21:10:10.830368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.999 [2024-07-13 21:10:10.830379] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:56.999 [2024-07-13 21:10:10.830408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:56.999 [2024-07-13 21:10:10.830419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:56.999 [2024-07-13 21:10:10.830431] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:56.999 [2024-07-13 21:10:10.830441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:56.999 [2024-07-13 21:10:10.830452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:56.999 [2024-07-13 21:10:10.830464] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:56.999 [2024-07-13 21:10:10.830479] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:56.999 [2024-07-13 21:10:10.830491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:56.999 [2024-07-13 21:10:10.830503] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:56.999 [2024-07-13 21:10:10.830515] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:56.999 [2024-07-13 21:10:10.830526] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:56.999 [2024-07-13 21:10:10.830538] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:56.999 [2024-07-13 21:10:10.830549] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:56.999 [2024-07-13 21:10:10.830561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:56.999 [2024-07-13 21:10:10.830572] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:56.999 [2024-07-13 21:10:10.830584] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:56.999 [2024-07-13 
21:10:10.830595] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:56.999 [2024-07-13 21:10:10.830606] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:56.999 [2024-07-13 21:10:10.830618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:56.999 [2024-07-13 21:10:10.830630] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:56.999 [2024-07-13 21:10:10.830641] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:56.999 [2024-07-13 21:10:10.830654] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:56.999 [2024-07-13 21:10:10.830666] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:56.999 [2024-07-13 21:10:10.830678] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:56.999 [2024-07-13 21:10:10.830690] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:56.999 [2024-07-13 21:10:10.830701] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:56.999 [2024-07-13 21:10:10.830714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.999 [2024-07-13 21:10:10.830726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:56.999 [2024-07-13 21:10:10.830738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:18:56.999 [2024-07-13 21:10:10.830750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.999 [2024-07-13 21:10:10.848204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.999 [2024-07-13 21:10:10.848251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:56.999 [2024-07-13 21:10:10.848288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.399 ms 00:18:56.999 [2024-07-13 21:10:10.848300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.999 [2024-07-13 21:10:10.848404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.999 [2024-07-13 21:10:10.848428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:56.999 [2024-07-13 21:10:10.848442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:56.999 [2024-07-13 21:10:10.848465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.999 [2024-07-13 21:10:10.899084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.999 [2024-07-13 21:10:10.899135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:56.999 [2024-07-13 21:10:10.899170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.506 ms 00:18:56.999 [2024-07-13 21:10:10.899187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.999 [2024-07-13 
21:10:10.899258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.999 [2024-07-13 21:10:10.899275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:56.999 [2024-07-13 21:10:10.899286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:56.999 [2024-07-13 21:10:10.899296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.999 [2024-07-13 21:10:10.899670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.999 [2024-07-13 21:10:10.899689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:56.999 [2024-07-13 21:10:10.899702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:18:56.999 [2024-07-13 21:10:10.899712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.999 [2024-07-13 21:10:10.899854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.999 [2024-07-13 21:10:10.899873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:56.999 [2024-07-13 21:10:10.899928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:56.999 [2024-07-13 21:10:10.899941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.999 [2024-07-13 21:10:10.916187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.999 [2024-07-13 21:10:10.916231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:56.999 [2024-07-13 21:10:10.916249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.195 ms 00:18:56.999 [2024-07-13 21:10:10.916261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:10.932707] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:57.259 [2024-07-13 21:10:10.932750] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:57.259 [2024-07-13 21:10:10.932783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:10.932795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:57.259 [2024-07-13 21:10:10.932807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.396 ms 00:18:57.259 [2024-07-13 21:10:10.932818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:10.960553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:10.960609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:57.259 [2024-07-13 21:10:10.960656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.659 ms 00:18:57.259 [2024-07-13 21:10:10.960667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:10.975719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:10.975758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:57.259 [2024-07-13 21:10:10.975791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.006 ms 00:18:57.259 [2024-07-13 21:10:10.975801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:10.990441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:10.990479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:57.259 [2024-07-13 21:10:10.990511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.566 ms 00:18:57.259 [2024-07-13 21:10:10.990522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:10.991055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:10.991091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:57.259 [2024-07-13 21:10:10.991107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:18:57.259 [2024-07-13 21:10:10.991119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:11.060869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:11.060964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:57.259 [2024-07-13 21:10:11.061002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.724 ms 00:18:57.259 [2024-07-13 21:10:11.061014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:11.074161] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:57.259 [2024-07-13 21:10:11.076669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:11.076706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:57.259 [2024-07-13 21:10:11.076723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.572 ms 00:18:57.259 [2024-07-13 21:10:11.076736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:11.076854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:11.076877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:57.259 [2024-07-13 21:10:11.076897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:57.259 [2024-07-13 21:10:11.076909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:11.077006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:11.077026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:57.259 [2024-07-13 21:10:11.077039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:57.259 [2024-07-13 21:10:11.077050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:11.078904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:11.078973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:57.259 [2024-07-13 21:10:11.079007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.824 ms 00:18:57.259 [2024-07-13 21:10:11.079041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:11.079078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:11.079094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:57.259 [2024-07-13 21:10:11.079106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 
ms 00:18:57.259 [2024-07-13 21:10:11.079117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:11.079182] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:57.259 [2024-07-13 21:10:11.079200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:11.079212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:57.259 [2024-07-13 21:10:11.079224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:57.259 [2024-07-13 21:10:11.079235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:11.110583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:11.110625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:57.259 [2024-07-13 21:10:11.110660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.314 ms 00:18:57.259 [2024-07-13 21:10:11.110672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:11.110754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.259 [2024-07-13 21:10:11.110774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:57.259 [2024-07-13 21:10:11.110787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:57.259 [2024-07-13 21:10:11.110808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.259 [2024-07-13 21:10:11.112025] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.383 ms, result 0 00:19:41.040  Copying: 1024/1024 [MB] (average 23 MBps) [2024-07-13 21:10:54.760345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.760404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:41.040 [2024-07-13 21:10:54.760427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.004 ms 00:19:41.040 [2024-07-13 21:10:54.760440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.760472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:41.040 [2024-07-13 21:10:54.763819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.763878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:41.040 [2024-07-13 21:10:54.763910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:19:41.040 [2024-07-13 21:10:54.763921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.765771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.765862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:41.040 [2024-07-13 21:10:54.765879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.816 ms 00:19:41.040 [2024-07-13 21:10:54.765890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.781909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.781945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:41.040 [2024-07-13 21:10:54.781976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.998 ms 00:19:41.040 [2024-07-13 21:10:54.781987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.787606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.787642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:41.040 [2024-07-13 21:10:54.787670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.568 ms 00:19:41.040 [2024-07-13 21:10:54.787680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.813308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.813345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:41.040 [2024-07-13 21:10:54.813375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.547 ms 00:19:41.040 [2024-07-13 21:10:54.813386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.828618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.828684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:41.040 [2024-07-13 21:10:54.828714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.194 ms 00:19:41.040 [2024-07-13 21:10:54.828725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.828920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.828941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:41.040 [2024-07-13 21:10:54.828960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:19:41.040 [2024-07-13 21:10:54.828971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.856036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:41.040 [2024-07-13 21:10:54.856071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:41.040 [2024-07-13 21:10:54.856101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.046 ms 00:19:41.040 [2024-07-13 21:10:54.856111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.881924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.881959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:41.040 [2024-07-13 21:10:54.881989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.776 ms 00:19:41.040 [2024-07-13 21:10:54.881999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.906697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.906732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:41.040 [2024-07-13 21:10:54.906761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.662 ms 00:19:41.040 [2024-07-13 21:10:54.906771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.935025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.040 [2024-07-13 21:10:54.935063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:41.040 [2024-07-13 21:10:54.935108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.147 ms 00:19:41.040 [2024-07-13 21:10:54.935118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.040 [2024-07-13 21:10:54.935157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:41.040 [2024-07-13 21:10:54.935178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 
[2024-07-13 21:10:54.935318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:41.040 [2024-07-13 21:10:54.935532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:19:41.041 [2024-07-13 21:10:54.935572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.935994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:41.041 [2024-07-13 21:10:54.936328] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:41.041 [2024-07-13 21:10:54.936354] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e284b7a-a9ff-453f-932a-f2e27dc3593a 00:19:41.041 [2024-07-13 21:10:54.936366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:41.041 [2024-07-13 21:10:54.936386] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:41.041 [2024-07-13 21:10:54.936397] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:41.041 [2024-07-13 21:10:54.936409] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:41.041 [2024-07-13 21:10:54.936419] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:41.041 [2024-07-13 21:10:54.936431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:41.041 [2024-07-13 21:10:54.936442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:41.041 [2024-07-13 21:10:54.936452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:41.041 [2024-07-13 21:10:54.936462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:41.041 [2024-07-13 21:10:54.936474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.041 [2024-07-13 21:10:54.936486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:41.041 [2024-07-13 21:10:54.936499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:19:41.041 [2024-07-13 
21:10:54.936510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.041 [2024-07-13 21:10:54.952727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.041 [2024-07-13 21:10:54.952765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:41.041 [2024-07-13 21:10:54.952796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.109 ms 00:19:41.041 [2024-07-13 21:10:54.952806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.041 [2024-07-13 21:10:54.953109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.041 [2024-07-13 21:10:54.953128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:41.041 [2024-07-13 21:10:54.953157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:19:41.041 [2024-07-13 21:10:54.953168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:54.994401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:54.994459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.300 [2024-07-13 21:10:54.994490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:54.994501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:54.994555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:54.994569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.300 [2024-07-13 21:10:54.994580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:54.994590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:54.994688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:54.994707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.300 [2024-07-13 21:10:54.994718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:54.994728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:54.994748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:54.994766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.300 [2024-07-13 21:10:54.994800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:54.994814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:55.077704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:55.077758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.300 [2024-07-13 21:10:55.077775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:55.077786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:55.114149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:55.114186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.300 [2024-07-13 21:10:55.114216] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:55.114226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:55.114305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:55.114328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.300 [2024-07-13 21:10:55.114339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:55.114349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:55.114399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:55.114415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.300 [2024-07-13 21:10:55.114425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:55.114435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:55.114537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:55.114561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.300 [2024-07-13 21:10:55.114572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:55.114581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:55.114623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:55.114638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:41.300 [2024-07-13 21:10:55.114649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:55.114658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:55.114698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:55.114711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.300 [2024-07-13 21:10:55.114727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:55.114736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:55.114781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.300 [2024-07-13 21:10:55.114796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.300 [2024-07-13 21:10:55.114806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.300 [2024-07-13 21:10:55.114815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.300 [2024-07-13 21:10:55.114998] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.624 ms, result 0 00:19:42.675 00:19:42.675 00:19:42.675 21:10:56 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:42.675 [2024-07-13 21:10:56.325364] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:42.675 [2024-07-13 21:10:56.325530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74164 ] 00:19:42.675 [2024-07-13 21:10:56.492001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.936 [2024-07-13 21:10:56.641623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.195 [2024-07-13 21:10:56.898977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:43.195 [2024-07-13 21:10:56.899062] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:43.195 [2024-07-13 21:10:57.047648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.195 [2024-07-13 21:10:57.047692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:43.195 [2024-07-13 21:10:57.047726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:43.195 [2024-07-13 21:10:57.047736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.195 [2024-07-13 21:10:57.047797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.195 [2024-07-13 21:10:57.047814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.195 [2024-07-13 21:10:57.047825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:43.195 [2024-07-13 21:10:57.047835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.195 [2024-07-13 21:10:57.047912] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:43.195 [2024-07-13 21:10:57.048911] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:43.195 [2024-07-13 21:10:57.048948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.195 [2024-07-13 21:10:57.048961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.196 [2024-07-13 21:10:57.048972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:19:43.196 [2024-07-13 21:10:57.048981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.196 [2024-07-13 21:10:57.050106] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:43.196 [2024-07-13 21:10:57.063381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.196 [2024-07-13 21:10:57.063418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:43.196 [2024-07-13 21:10:57.063453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.277 ms 00:19:43.196 [2024-07-13 21:10:57.063463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.196 [2024-07-13 21:10:57.063521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.196 [2024-07-13 21:10:57.063537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:43.196 [2024-07-13 21:10:57.063548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:43.196 [2024-07-13 21:10:57.063556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.196 [2024-07-13 21:10:57.067801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.196 [2024-07-13 
21:10:57.067862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.196 [2024-07-13 21:10:57.067909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.182 ms 00:19:43.196 [2024-07-13 21:10:57.067919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.196 [2024-07-13 21:10:57.068013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.196 [2024-07-13 21:10:57.068032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.196 [2024-07-13 21:10:57.068043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:43.196 [2024-07-13 21:10:57.068052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.196 [2024-07-13 21:10:57.068119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.196 [2024-07-13 21:10:57.068139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:43.196 [2024-07-13 21:10:57.068151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:43.196 [2024-07-13 21:10:57.068160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.196 [2024-07-13 21:10:57.068264] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.196 [2024-07-13 21:10:57.071805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.196 [2024-07-13 21:10:57.071864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.196 [2024-07-13 21:10:57.071894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.592 ms 00:19:43.196 [2024-07-13 21:10:57.071904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.196 [2024-07-13 21:10:57.071947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.196 [2024-07-13 21:10:57.071963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:43.196 [2024-07-13 21:10:57.071974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:43.196 [2024-07-13 21:10:57.071983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.196 [2024-07-13 21:10:57.072007] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:43.196 [2024-07-13 21:10:57.072036] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:43.196 [2024-07-13 21:10:57.072072] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:43.196 [2024-07-13 21:10:57.072089] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:43.196 [2024-07-13 21:10:57.072205] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:43.196 [2024-07-13 21:10:57.072220] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:43.196 [2024-07-13 21:10:57.072275] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:43.196 [2024-07-13 21:10:57.072305] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:43.196 [2024-07-13 21:10:57.072318] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:43.196 [2024-07-13 21:10:57.072335] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:43.196 [2024-07-13 21:10:57.072345] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:43.196 [2024-07-13 21:10:57.072355] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:43.196 [2024-07-13 21:10:57.072365] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:43.196 [2024-07-13 21:10:57.072377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.196 [2024-07-13 21:10:57.072387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:43.196 [2024-07-13 21:10:57.072398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:19:43.196 [2024-07-13 21:10:57.072408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.196 [2024-07-13 21:10:57.072494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.196 [2024-07-13 21:10:57.072517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:43.196 [2024-07-13 21:10:57.072534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:43.196 [2024-07-13 21:10:57.072545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.196 [2024-07-13 21:10:57.072662] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:43.196 [2024-07-13 21:10:57.072692] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:43.196 [2024-07-13 21:10:57.072704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.196 [2024-07-13 21:10:57.072715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.196 [2024-07-13 21:10:57.072726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:43.196 [2024-07-13 21:10:57.072735] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:43.196 [2024-07-13 21:10:57.072745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:43.196 [2024-07-13 21:10:57.072755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:43.196 [2024-07-13 21:10:57.072765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:43.196 [2024-07-13 21:10:57.072775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.196 [2024-07-13 21:10:57.072798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:43.196 [2024-07-13 21:10:57.072808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:43.196 [2024-07-13 21:10:57.072817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.196 [2024-07-13 21:10:57.072826] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:43.196 [2024-07-13 21:10:57.072838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:43.196 [2024-07-13 21:10:57.072847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.196 [2024-07-13 21:10:57.072857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:43.196 [2024-07-13 21:10:57.072866] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:43.196 [2024-07-13 21:10:57.072875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:43.196 [2024-07-13 21:10:57.072900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:43.196 [2024-07-13 21:10:57.072925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:43.196 [2024-07-13 21:10:57.072947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:43.196 [2024-07-13 21:10:57.073180] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:43.196 [2024-07-13 21:10:57.073222] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:43.196 [2024-07-13 21:10:57.073355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:43.196 [2024-07-13 21:10:57.073424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:43.196 [2024-07-13 21:10:57.073476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:43.196 [2024-07-13 21:10:57.073526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:43.196 [2024-07-13 21:10:57.073574] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:43.196 [2024-07-13 21:10:57.073620] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:43.196 [2024-07-13 21:10:57.073634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:43.196 [2024-07-13 21:10:57.073650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:43.196 [2024-07-13 21:10:57.073660] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:43.196 [2024-07-13 21:10:57.073673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:43.196 [2024-07-13 21:10:57.073683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:43.196 [2024-07-13 21:10:57.073707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:43.196 [2024-07-13 21:10:57.073732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.196 [2024-07-13 21:10:57.073741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:43.196 [2024-07-13 21:10:57.073750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:43.196 [2024-07-13 21:10:57.073759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.196 [2024-07-13 21:10:57.073769] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:43.196 [2024-07-13 21:10:57.073779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:43.196 [2024-07-13 21:10:57.073790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.196 [2024-07-13 21:10:57.073806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.196 [2024-07-13 21:10:57.073816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:43.196 [2024-07-13 21:10:57.073826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:43.196 [2024-07-13 21:10:57.073836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:43.196 [2024-07-13 21:10:57.073846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:43.196 [2024-07-13 21:10:57.073855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:43.196 [2024-07-13 21:10:57.073906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:43.196 [2024-07-13 21:10:57.073923] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:43.196 [2024-07-13 21:10:57.073936] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.196 [2024-07-13 21:10:57.073948] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:43.196 [2024-07-13 21:10:57.073959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:43.196 [2024-07-13 21:10:57.073969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:43.196 [2024-07-13 21:10:57.073980] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:43.196 [2024-07-13 21:10:57.073990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:43.196 [2024-07-13 21:10:57.074015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:43.197 [2024-07-13 21:10:57.074025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:43.197 [2024-07-13 21:10:57.074035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:43.197 [2024-07-13 21:10:57.074045] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:43.197 [2024-07-13 21:10:57.074056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:43.197 [2024-07-13 21:10:57.074066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:43.197 [2024-07-13 21:10:57.074077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:43.197 [2024-07-13 21:10:57.074088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:43.197 [2024-07-13 21:10:57.074113] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:43.197 [2024-07-13 21:10:57.074124] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.197 [2024-07-13 21:10:57.074135] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:43.197 [2024-07-13 21:10:57.074145] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:43.197 [2024-07-13 21:10:57.074155] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:43.197 [2024-07-13 21:10:57.074165] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:19:43.197 [2024-07-13 21:10:57.074177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.197 [2024-07-13 21:10:57.074187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:43.197 [2024-07-13 21:10:57.074197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:19:43.197 [2024-07-13 21:10:57.074221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.197 [2024-07-13 21:10:57.089310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.197 [2024-07-13 21:10:57.089347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.197 [2024-07-13 21:10:57.089378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.006 ms 00:19:43.197 [2024-07-13 21:10:57.089387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.197 [2024-07-13 21:10:57.089467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.197 [2024-07-13 21:10:57.089486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:43.197 [2024-07-13 21:10:57.089497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:43.197 [2024-07-13 21:10:57.089505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.456 [2024-07-13 21:10:57.133841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.456 [2024-07-13 21:10:57.133942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.456 [2024-07-13 21:10:57.133961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.276 ms 00:19:43.456 [2024-07-13 21:10:57.133977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.456 [2024-07-13 21:10:57.134038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.456 [2024-07-13 21:10:57.134054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.456 [2024-07-13 21:10:57.134066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:43.456 [2024-07-13 21:10:57.134075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.456 [2024-07-13 21:10:57.134461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.456 [2024-07-13 21:10:57.134479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.456 [2024-07-13 21:10:57.134490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:19:43.456 [2024-07-13 21:10:57.134499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.456 [2024-07-13 21:10:57.134624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.456 [2024-07-13 21:10:57.134642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.456 [2024-07-13 21:10:57.134652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:43.456 [2024-07-13 21:10:57.134662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.456 [2024-07-13 21:10:57.148800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.456 [2024-07-13 21:10:57.148864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.456 [2024-07-13 21:10:57.148897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.115 ms 00:19:43.456 [2024-07-13 
21:10:57.148907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.456 [2024-07-13 21:10:57.162393] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:43.456 [2024-07-13 21:10:57.162429] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:43.456 [2024-07-13 21:10:57.162459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.456 [2024-07-13 21:10:57.162469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:43.456 [2024-07-13 21:10:57.162480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.436 ms 00:19:43.456 [2024-07-13 21:10:57.162489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.456 [2024-07-13 21:10:57.190510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.456 [2024-07-13 21:10:57.190546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:43.456 [2024-07-13 21:10:57.190576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.982 ms 00:19:43.456 [2024-07-13 21:10:57.190585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.456 [2024-07-13 21:10:57.204135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.456 [2024-07-13 21:10:57.204170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:43.457 [2024-07-13 21:10:57.204200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.510 ms 00:19:43.457 [2024-07-13 21:10:57.204209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.217429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.217463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:43.457 [2024-07-13 21:10:57.217493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.145 ms 00:19:43.457 [2024-07-13 21:10:57.217502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.217919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.217942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:43.457 [2024-07-13 21:10:57.217969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:43.457 [2024-07-13 21:10:57.217979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.280694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.280757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:43.457 [2024-07-13 21:10:57.280792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.693 ms 00:19:43.457 [2024-07-13 21:10:57.280801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.290999] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:43.457 [2024-07-13 21:10:57.293096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.293124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:43.457 [2024-07-13 21:10:57.293153] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.200 ms 00:19:43.457 [2024-07-13 21:10:57.293164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.293249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.293269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:43.457 [2024-07-13 21:10:57.293280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:43.457 [2024-07-13 21:10:57.293289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.293361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.293379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:43.457 [2024-07-13 21:10:57.293390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:43.457 [2024-07-13 21:10:57.293399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.295032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.295078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:43.457 [2024-07-13 21:10:57.295110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:19:43.457 [2024-07-13 21:10:57.295119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.295152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.295166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:43.457 [2024-07-13 21:10:57.295177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:43.457 [2024-07-13 21:10:57.295193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.295230] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:43.457 [2024-07-13 21:10:57.295245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.295254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:43.457 [2024-07-13 21:10:57.295264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:43.457 [2024-07-13 21:10:57.295276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.320384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.320423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:43.457 [2024-07-13 21:10:57.320454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.071 ms 00:19:43.457 [2024-07-13 21:10:57.320464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.320549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.457 [2024-07-13 21:10:57.320572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:43.457 [2024-07-13 21:10:57.320583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:43.457 [2024-07-13 21:10:57.320593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.457 [2024-07-13 21:10:57.321855] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 273.666 ms, result 0 00:20:29.062  Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-13 21:11:42.841686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.062 [2024-07-13 21:11:42.841795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:29.062 [2024-07-13 21:11:42.841835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.062 [2024-07-13 21:11:42.841848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.062 [2024-07-13 21:11:42.841917] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.062 [2024-07-13 21:11:42.847584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.062 [2024-07-13 21:11:42.847638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:29.062 [2024-07-13 21:11:42.847659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.641 ms 00:20:29.062 [2024-07-13 21:11:42.847683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.062 [2024-07-13 21:11:42.848090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.062 [2024-07-13 21:11:42.848161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:29.062 [2024-07-13 21:11:42.848180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:20:29.062 [2024-07-13 21:11:42.848195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.062 [2024-07-13 21:11:42.852998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.062 [2024-07-13 21:11:42.853037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:29.062 [2024-07-13 21:11:42.853055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.777 ms 00:20:29.062 [2024-07-13 21:11:42.853069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
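
The dump_region lines above and the ftl_superblock_v5_md_layout_dump records describe the same regions two ways: MiB figures versus hex blk_offs/blk_sz block counts. The two agree if each FTL block is 4 KiB; a minimal cross-check sketch, assuming that block size (it is not stated anywhere in this log) and using values copied from the "SB metadata layout - nvc" records above:

    FTL_BLOCK_SIZE = 4096  # bytes per FTL block -- an assumption; it is what makes the MiB math match

    def mib(blocks: int) -> float:
        # Convert a block count from the superblock dump into MiB.
        return blocks * FTL_BLOCK_SIZE / (1 << 20)

    # (blk_offs, blk_sz) pairs copied from the nvc metadata layout records above
    regions = {
        "l2p (type 0x2)":     (0x20,   0x5000),
        "band_md (type 0x3)": (0x5020, 0x80),
        "nvc_md (type 0x6)":  (0x61a0, 0x20),
    }
    for name, (blk_offs, blk_sz) in regions.items():
        print(f"{name}: offset {mib(blk_offs):.2f} MiB, blocks {mib(blk_sz):.2f} MiB")
    # -> l2p (type 0x2): offset 0.12 MiB, blocks 80.00 MiB
    #    band_md (type 0x3): offset 80.12 MiB, blocks 0.50 MiB
    #    nvc_md (type 0x6): offset 97.62 MiB, blocks 0.12 MiB

The same arithmetic covers the rest of the layout, e.g. region type 0x8 at blk_offs:0x61e0 blk_sz:0x100000 is exactly the data_nvc entry (offset 97.88 MiB, blocks 4096.00 MiB).
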
00:20:29.062 [2024-07-13 21:11:42.860415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.062 [2024-07-13 21:11:42.860448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:29.062 [2024-07-13 21:11:42.860463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.311 ms 00:20:29.062 [2024-07-13 21:11:42.860475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.062 [2024-07-13 21:11:42.891504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.062 [2024-07-13 21:11:42.891542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:29.062 [2024-07-13 21:11:42.891556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.924 ms 00:20:29.062 [2024-07-13 21:11:42.891565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.062 [2024-07-13 21:11:42.907764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.062 [2024-07-13 21:11:42.907801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:29.062 [2024-07-13 21:11:42.907832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.176 ms 00:20:29.062 [2024-07-13 21:11:42.907842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.062 [2024-07-13 21:11:42.908038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.062 [2024-07-13 21:11:42.908066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:29.062 [2024-07-13 21:11:42.908078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:20:29.062 [2024-07-13 21:11:42.908089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.062 [2024-07-13 21:11:42.935372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.062 [2024-07-13 21:11:42.935407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:29.062 [2024-07-13 21:11:42.935437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.236 ms 00:20:29.062 [2024-07-13 21:11:42.935446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.062 [2024-07-13 21:11:42.965963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.062 [2024-07-13 21:11:42.966013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:29.062 [2024-07-13 21:11:42.966030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.494 ms 00:20:29.062 [2024-07-13 21:11:42.966041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.322 [2024-07-13 21:11:42.996420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.322 [2024-07-13 21:11:42.996462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:29.322 [2024-07-13 21:11:42.996498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.353 ms 00:20:29.322 [2024-07-13 21:11:42.996509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.322 [2024-07-13 21:11:43.026670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.322 [2024-07-13 21:11:43.026709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:29.322 [2024-07-13 21:11:43.026757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.073 ms 00:20:29.322 [2024-07-13 
21:11:43.026784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.322 [2024-07-13 21:11:43.026809] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:29.322 [2024-07-13 21:11:43.026830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:29.322 [2024-07-13 21:11:43.026988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027129] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 
21:11:43.027426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:20:29.323 [2024-07-13 21:11:43.027709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:29.323 [2024-07-13 21:11:43.027842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.027993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.028005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.028018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.028029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.028041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:29.324 [2024-07-13 21:11:43.028061] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:29.324 [2024-07-13 21:11:43.028072] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e284b7a-a9ff-453f-932a-f2e27dc3593a 00:20:29.324 [2024-07-13 21:11:43.028092] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:29.324 [2024-07-13 21:11:43.028103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:29.324 [2024-07-13 21:11:43.028113] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:29.324 [2024-07-13 21:11:43.028125] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:29.324 [2024-07-13 21:11:43.028149] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:29.324 [2024-07-13 21:11:43.028161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:29.324 [2024-07-13 21:11:43.028171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:29.324 [2024-07-13 21:11:43.028180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:29.324 [2024-07-13 21:11:43.028190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:29.324 [2024-07-13 21:11:43.028200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.324 [2024-07-13 21:11:43.028211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:29.324 [2024-07-13 21:11:43.028222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:20:29.324 [2024-07-13 21:11:43.028244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.045530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.324 [2024-07-13 21:11:43.045563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:29.324 [2024-07-13 21:11:43.045592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.199 ms 00:20:29.324 [2024-07-13 21:11:43.045602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.045835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.324 [2024-07-13 21:11:43.045867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:29.324 [2024-07-13 21:11:43.045880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:20:29.324 [2024-07-13 21:11:43.045912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.092294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.092368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.324 [2024-07-13 21:11:43.092385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.092397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.092462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.092478] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.324 [2024-07-13 21:11:43.092490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.092508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.092608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.092653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.324 [2024-07-13 21:11:43.092664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.092686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.092717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.092729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.324 [2024-07-13 21:11:43.092750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.092771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.181839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.181936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.324 [2024-07-13 21:11:43.181956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.181968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.218384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.218421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.324 [2024-07-13 21:11:43.218451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.218461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.218548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.218564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.324 [2024-07-13 21:11:43.218574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.218584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.218631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.218645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.324 [2024-07-13 21:11:43.218655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.218664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.218802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.218821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.324 [2024-07-13 21:11:43.218832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.218842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.218931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.218951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:29.324 [2024-07-13 21:11:43.218962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.218983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.219057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.219079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.324 [2024-07-13 21:11:43.219106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.219145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.219216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.324 [2024-07-13 21:11:43.219238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.324 [2024-07-13 21:11:43.219249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.324 [2024-07-13 21:11:43.219259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.324 [2024-07-13 21:11:43.219464] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.741 ms, result 0 00:20:30.701 00:20:30.701 00:20:30.701 21:11:44 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:32.614 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:32.614 21:11:46 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:32.614 [2024-07-13 21:11:46.314815] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
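
Every management step in these traces is logged as the same four records from mngt/ftl_mngt.c (Action, name, duration, status), and the spdk_dd pass starting above repeats the pattern. Ranking the slowest steps can be done with a throwaway script over a saved copy of this output; a sketch in Python, assuming the console text is saved as console.log (a hypothetical path):

    import re

    text = open("console.log").read()  # hypothetical capture of this console output
    # Pair each "name: <step>" record with the next "duration: <ms> ms" record;
    # the finish_msg lines use "name '...'" / "duration =" and are skipped.
    steps = re.findall(r"name: (.+?)\s+\d{2}:\d{2}:\d{2}.*?duration: ([\d.]+) ms", text, re.S)
    for step, ms in sorted(steps, key=lambda s: -float(s[1]))[:5]:
        print(f"{float(ms):9.3f} ms  {step}")
    # For the run above this ranks "Restore P2L checkpoints" (62.693 ms) first.
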
00:20:32.614 [2024-07-13 21:11:46.314949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74667 ] 00:20:32.614 [2024-07-13 21:11:46.471961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.873 [2024-07-13 21:11:46.675373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.132 [2024-07-13 21:11:46.921106] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:33.132 [2024-07-13 21:11:46.921175] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:33.393 [2024-07-13 21:11:47.069761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.069883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:33.393 [2024-07-13 21:11:47.069905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:33.393 [2024-07-13 21:11:47.069916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.069998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.070031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.393 [2024-07-13 21:11:47.070058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:33.393 [2024-07-13 21:11:47.070069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.070099] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:33.393 [2024-07-13 21:11:47.071130] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:33.393 [2024-07-13 21:11:47.071166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.071179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.393 [2024-07-13 21:11:47.071189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:20:33.393 [2024-07-13 21:11:47.071198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.072420] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:33.393 [2024-07-13 21:11:47.086964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.087001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:33.393 [2024-07-13 21:11:47.087021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.546 ms 00:20:33.393 [2024-07-13 21:11:47.087031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.087088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.087104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:33.393 [2024-07-13 21:11:47.087114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:33.393 [2024-07-13 21:11:47.087123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.091418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 
21:11:47.091451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.393 [2024-07-13 21:11:47.091464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:20:33.393 [2024-07-13 21:11:47.091473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.091560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.091576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.393 [2024-07-13 21:11:47.091586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:33.393 [2024-07-13 21:11:47.091595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.091639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.091658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:33.393 [2024-07-13 21:11:47.091667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:33.393 [2024-07-13 21:11:47.091676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.091707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:33.393 [2024-07-13 21:11:47.095713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.095749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.393 [2024-07-13 21:11:47.095763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.017 ms 00:20:33.393 [2024-07-13 21:11:47.095773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.095826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.095840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:33.393 [2024-07-13 21:11:47.095882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:33.393 [2024-07-13 21:11:47.095910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.095953] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:33.393 [2024-07-13 21:11:47.095985] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:33.393 [2024-07-13 21:11:47.096022] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:33.393 [2024-07-13 21:11:47.096040] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:33.393 [2024-07-13 21:11:47.096112] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:33.393 [2024-07-13 21:11:47.096125] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:33.393 [2024-07-13 21:11:47.096137] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:33.393 [2024-07-13 21:11:47.096150] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:33.393 [2024-07-13 21:11:47.096162] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:33.393 [2024-07-13 21:11:47.096177] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:33.393 [2024-07-13 21:11:47.096187] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:33.393 [2024-07-13 21:11:47.096197] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:33.393 [2024-07-13 21:11:47.096206] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:33.393 [2024-07-13 21:11:47.096251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.096262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:33.393 [2024-07-13 21:11:47.096272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:20:33.393 [2024-07-13 21:11:47.096281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.096379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.393 [2024-07-13 21:11:47.096395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:33.393 [2024-07-13 21:11:47.096409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:33.393 [2024-07-13 21:11:47.096419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.393 [2024-07-13 21:11:47.096531] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:33.393 [2024-07-13 21:11:47.096562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:33.393 [2024-07-13 21:11:47.096574] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.393 [2024-07-13 21:11:47.096585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.393 [2024-07-13 21:11:47.096596] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:33.393 [2024-07-13 21:11:47.096607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:33.393 [2024-07-13 21:11:47.096628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:33.393 [2024-07-13 21:11:47.096653] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:33.393 [2024-07-13 21:11:47.096663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:33.393 [2024-07-13 21:11:47.096672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.393 [2024-07-13 21:11:47.096696] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:33.393 [2024-07-13 21:11:47.096705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:33.393 [2024-07-13 21:11:47.096714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.393 [2024-07-13 21:11:47.096723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:33.393 [2024-07-13 21:11:47.096732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:33.393 [2024-07-13 21:11:47.096742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.393 [2024-07-13 21:11:47.096752] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:33.393 [2024-07-13 21:11:47.096761] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:33.393 [2024-07-13 21:11:47.096770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:33.393 [2024-07-13 21:11:47.096779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:33.394 [2024-07-13 21:11:47.096788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:33.394 [2024-07-13 21:11:47.096811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:33.394 [2024-07-13 21:11:47.096820] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:33.394 [2024-07-13 21:11:47.096830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:33.394 [2024-07-13 21:11:47.096839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:33.394 [2024-07-13 21:11:47.096859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:33.394 [2024-07-13 21:11:47.096869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:33.394 [2024-07-13 21:11:47.096878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:33.394 [2024-07-13 21:11:47.096888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:33.394 [2024-07-13 21:11:47.096898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:33.394 [2024-07-13 21:11:47.096907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:33.394 [2024-07-13 21:11:47.096916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:33.394 [2024-07-13 21:11:47.096925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:33.394 [2024-07-13 21:11:47.096934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:33.394 [2024-07-13 21:11:47.096956] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:33.394 [2024-07-13 21:11:47.096969] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:33.394 [2024-07-13 21:11:47.096979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.394 [2024-07-13 21:11:47.096988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:33.394 [2024-07-13 21:11:47.096998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:33.394 [2024-07-13 21:11:47.097007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.394 [2024-07-13 21:11:47.097016] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:33.394 [2024-07-13 21:11:47.097026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:33.394 [2024-07-13 21:11:47.097036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.394 [2024-07-13 21:11:47.097051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.394 [2024-07-13 21:11:47.097062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:33.394 [2024-07-13 21:11:47.097072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:33.394 [2024-07-13 21:11:47.097081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:33.394 [2024-07-13 21:11:47.097093] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:33.394 [2024-07-13 21:11:47.097102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:33.394 [2024-07-13 21:11:47.097112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:33.394 [2024-07-13 21:11:47.097123] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:33.394 [2024-07-13 21:11:47.097136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.394 [2024-07-13 21:11:47.097148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:33.394 [2024-07-13 21:11:47.097158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:33.394 [2024-07-13 21:11:47.097183] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:33.394 [2024-07-13 21:11:47.097193] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:33.394 [2024-07-13 21:11:47.097203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:33.394 [2024-07-13 21:11:47.097213] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:33.394 [2024-07-13 21:11:47.097223] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:33.394 [2024-07-13 21:11:47.097233] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:33.394 [2024-07-13 21:11:47.097243] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:33.394 [2024-07-13 21:11:47.097253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:33.394 [2024-07-13 21:11:47.097263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:33.394 [2024-07-13 21:11:47.097273] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:33.394 [2024-07-13 21:11:47.097285] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:33.394 [2024-07-13 21:11:47.097295] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:33.394 [2024-07-13 21:11:47.097306] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.394 [2024-07-13 21:11:47.097317] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:33.394 [2024-07-13 21:11:47.097328] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:33.394 [2024-07-13 21:11:47.097338] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:33.394 [2024-07-13 21:11:47.097349] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
00:20:33.394 [2024-07-13 21:11:47.097360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.097370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:33.394 [2024-07-13 21:11:47.097381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:20:33.394 [2024-07-13 21:11:47.097390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.115425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.115467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.394 [2024-07-13 21:11:47.115483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.963 ms 00:20:33.394 [2024-07-13 21:11:47.115493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.115588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.115609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:33.394 [2024-07-13 21:11:47.115619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:33.394 [2024-07-13 21:11:47.115628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.154944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.154988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.394 [2024-07-13 21:11:47.155004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.249 ms 00:20:33.394 [2024-07-13 21:11:47.155018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.155072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.155086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.394 [2024-07-13 21:11:47.155096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:33.394 [2024-07-13 21:11:47.155104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.155436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.155452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.394 [2024-07-13 21:11:47.155463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:33.394 [2024-07-13 21:11:47.155471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.155588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.155603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.394 [2024-07-13 21:11:47.155612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:33.394 [2024-07-13 21:11:47.155621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
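Every management step in these sequences is reported by mngt/ftl_mngt.c as the same trace_step() quadruplet: an Action line (or, on teardown, Rollback), then name, duration, and status. When a startup looks slow, the durations are what matter. A quick, throwaway sketch for pulling them out, assuming this console output has been saved one entry per line to console.log (an assumed capture file; nothing in the job itself writes it):

    # Pair each trace_step "name:" with the "duration:" entry that follows it,
    # then sort general-numerically on the duration and print the slowest steps.
    grep -o 'name: .*\|duration: [0-9.]* ms' console.log \
        | paste -d '\t' - - \
        | sort -t $'\t' -k2.11 -gr \
        | head -n 5

For the startup above, the biggest single contributors are the 58.343 ms "Restore P2L checkpoints" step and the 39.249 ms "Initialize NV cache" step.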
00:20:33.394 [2024-07-13 21:11:47.169490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.169525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.394 [2024-07-13 21:11:47.169539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.846 ms 00:20:33.394 [2024-07-13 21:11:47.169549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.182900] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:33.394 [2024-07-13 21:11:47.182952] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:33.394 [2024-07-13 21:11:47.182985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.182995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:33.394 [2024-07-13 21:11:47.183006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.326 ms 00:20:33.394 [2024-07-13 21:11:47.183015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.207259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.207296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:33.394 [2024-07-13 21:11:47.207311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.203 ms 00:20:33.394 [2024-07-13 21:11:47.207320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.219702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.219737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:33.394 [2024-07-13 21:11:47.219751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.341 ms 00:20:33.394 [2024-07-13 21:11:47.219759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.231955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.231989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:33.394 [2024-07-13 21:11:47.232003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.158 ms 00:20:33.394 [2024-07-13 21:11:47.232011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.232407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.232428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:33.394 [2024-07-13 21:11:47.232439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:33.394 [2024-07-13 21:11:47.232449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.394 [2024-07-13 21:11:47.290813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.394 [2024-07-13 21:11:47.290884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:33.394 [2024-07-13 21:11:47.290901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.343 ms 00:20:33.394 [2024-07-13 21:11:47.290911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.395 [2024-07-13 21:11:47.300651] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:33.395 [2024-07-13 21:11:47.302574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.395 [2024-07-13 21:11:47.302602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:33.395 [2024-07-13 21:11:47.302616] mngt/ftl_mngt.c:
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.609 ms 00:20:33.395 [2024-07-13 21:11:47.302625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.395 [2024-07-13 21:11:47.302701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.395 [2024-07-13 21:11:47.302719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:33.395 [2024-07-13 21:11:47.302730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:33.395 [2024-07-13 21:11:47.302738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.395 [2024-07-13 21:11:47.302807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.395 [2024-07-13 21:11:47.302822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:33.395 [2024-07-13 21:11:47.302832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:33.395 [2024-07-13 21:11:47.302874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.395 [2024-07-13 21:11:47.304554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.395 [2024-07-13 21:11:47.304586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:33.395 [2024-07-13 21:11:47.304630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.658 ms 00:20:33.395 [2024-07-13 21:11:47.304653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.395 [2024-07-13 21:11:47.304684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.395 [2024-07-13 21:11:47.304696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:33.395 [2024-07-13 21:11:47.304706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:33.395 [2024-07-13 21:11:47.304722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.395 [2024-07-13 21:11:47.304757] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:33.395 [2024-07-13 21:11:47.304771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.395 [2024-07-13 21:11:47.304780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:33.395 [2024-07-13 21:11:47.304789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:33.395 [2024-07-13 21:11:47.304801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.653 [2024-07-13 21:11:47.330240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.653 [2024-07-13 21:11:47.330276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:33.653 [2024-07-13 21:11:47.330291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.419 ms 00:20:33.653 [2024-07-13 21:11:47.330300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.653 [2024-07-13 21:11:47.330367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.653 [2024-07-13 21:11:47.330388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:33.653 [2024-07-13 21:11:47.330398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:33.653 [2024-07-13 21:11:47.330407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.653 [2024-07-13 21:11:47.331593] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 261.332 ms, result 0 00:21:18.014  Copying: 22/1024 [MB] (22 MBps) Copying: 45/1024 [MB] (23 MBps) Copying: 68/1024 [MB] (23 MBps) Copying: 90/1024 [MB] (22 MBps) Copying: 114/1024 [MB] (23 MBps) Copying: 138/1024 [MB] (23 MBps) Copying: 161/1024 [MB] (23 MBps) Copying: 184/1024 [MB] (23 MBps) Copying: 208/1024 [MB] (23 MBps) Copying: 232/1024 [MB] (23 MBps) Copying: 256/1024 [MB] (24 MBps) Copying: 280/1024 [MB] (24 MBps) Copying: 303/1024 [MB] (23 MBps) Copying: 326/1024 [MB] (22 MBps) Copying: 349/1024 [MB] (22 MBps) Copying: 373/1024 [MB] (23 MBps) Copying: 396/1024 [MB] (23 MBps) Copying: 419/1024 [MB] (23 MBps) Copying: 442/1024 [MB] (23 MBps) Copying: 465/1024 [MB] (23 MBps) Copying: 489/1024 [MB] (23 MBps) Copying: 512/1024 [MB] (23 MBps) Copying: 536/1024 [MB] (23 MBps) Copying: 559/1024 [MB] (23 MBps) Copying: 583/1024 [MB] (23 MBps) Copying: 607/1024 [MB] (23 MBps) Copying: 631/1024 [MB] (23 MBps) Copying: 655/1024 [MB] (23 MBps) Copying: 679/1024 [MB] (24 MBps) Copying: 702/1024 [MB] (23 MBps) Copying: 726/1024 [MB] (23 MBps) Copying: 750/1024 [MB] (23 MBps) Copying: 774/1024 [MB] (23 MBps) Copying: 798/1024 [MB] (23 MBps) Copying: 822/1024 [MB] (24 MBps) Copying: 846/1024 [MB] (24 MBps) Copying: 870/1024 [MB] (23 MBps) Copying: 894/1024 [MB] (23 MBps) Copying: 918/1024 [MB] (23 MBps) Copying: 942/1024 [MB] (24 MBps) Copying: 966/1024 [MB] (23 MBps) Copying: 989/1024 [MB] (23 MBps) Copying: 1013/1024 [MB] (23 MBps) Copying: 1048116/1048576 [kB] (10240 kBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-13 21:12:31.839757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.014 [2024-07-13 21:12:31.839907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:18.014 [2024-07-13 21:12:31.839946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:18.014 [2024-07-13 21:12:31.840002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.014 [2024-07-13 21:12:31.842080] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:18.014 [2024-07-13 21:12:31.847028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.014 [2024-07-13 21:12:31.847066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:18.014 [2024-07-13 21:12:31.847080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.889 ms 00:21:18.014 [2024-07-13 21:12:31.847090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.014 [2024-07-13 21:12:31.859530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.014 [2024-07-13 21:12:31.859583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:18.014 [2024-07-13 21:12:31.859599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.070 ms 00:21:18.014 [2024-07-13 21:12:31.859609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.014 [2024-07-13 21:12:31.879862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.014 [2024-07-13 21:12:31.879944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:18.014 [2024-07-13 21:12:31.879979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.226 ms 00:21:18.014 [2024-07-13 21:12:31.879991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.014 [2024-07-13 
21:12:31.885694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.014 [2024-07-13 21:12:31.885739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:18.014 [2024-07-13 21:12:31.885767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.664 ms 00:21:18.014 [2024-07-13 21:12:31.885776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.014 [2024-07-13 21:12:31.911301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.014 [2024-07-13 21:12:31.911355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:18.014 [2024-07-13 21:12:31.911386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.471 ms 00:21:18.014 [2024-07-13 21:12:31.911395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.014 [2024-07-13 21:12:31.926642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.014 [2024-07-13 21:12:31.926699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:18.014 [2024-07-13 21:12:31.926730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.209 ms 00:21:18.014 [2024-07-13 21:12:31.926740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.274 [2024-07-13 21:12:32.034961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.274 [2024-07-13 21:12:32.035019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:18.274 [2024-07-13 21:12:32.035036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.179 ms 00:21:18.274 [2024-07-13 21:12:32.035047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.274 [2024-07-13 21:12:32.062306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.274 [2024-07-13 21:12:32.062340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:18.274 [2024-07-13 21:12:32.062369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.238 ms 00:21:18.274 [2024-07-13 21:12:32.062378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.274 [2024-07-13 21:12:32.086440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.274 [2024-07-13 21:12:32.086474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:18.274 [2024-07-13 21:12:32.086504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.025 ms 00:21:18.274 [2024-07-13 21:12:32.086513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.274 [2024-07-13 21:12:32.110252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.274 [2024-07-13 21:12:32.110295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:18.274 [2024-07-13 21:12:32.110325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.702 ms 00:21:18.274 [2024-07-13 21:12:32.110334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.274 [2024-07-13 21:12:32.134042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.274 [2024-07-13 21:12:32.134092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:18.274 [2024-07-13 21:12:32.134122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.632 ms 00:21:18.274 [2024-07-13 21:12:32.134131] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.274 [2024-07-13 21:12:32.134168] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:18.274 [2024-07-13 21:12:32.134188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120832 / 261120 wr_cnt: 1 state: open 00:21:18.274 [2024-07-13 21:12:32.134201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:18.274 [2024-07-13 21:12:32.134667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134727] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.134996] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 
21:12:32.135249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:18.275 [2024-07-13 21:12:32.135277] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:18.275 [2024-07-13 21:12:32.135287] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e284b7a-a9ff-453f-932a-f2e27dc3593a 00:21:18.275 [2024-07-13 21:12:32.135297] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120832 00:21:18.275 [2024-07-13 21:12:32.135306] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121792 00:21:18.275 [2024-07-13 21:12:32.135315] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120832 00:21:18.275 [2024-07-13 21:12:32.135325] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:21:18.275 [2024-07-13 21:12:32.135334] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:18.275 [2024-07-13 21:12:32.135349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:18.275 [2024-07-13 21:12:32.135358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:18.275 [2024-07-13 21:12:32.135367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:18.275 [2024-07-13 21:12:32.135375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:18.275 [2024-07-13 21:12:32.135385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.275 [2024-07-13 21:12:32.135395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:18.275 [2024-07-13 21:12:32.135405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:21:18.275 [2024-07-13 21:12:32.135424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.275 [2024-07-13 21:12:32.148960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.275 [2024-07-13 21:12:32.148992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:18.275 [2024-07-13 21:12:32.149021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.500 ms 00:21:18.275 [2024-07-13 21:12:32.149037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.275 [2024-07-13 21:12:32.149265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.275 [2024-07-13 21:12:32.149285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:18.275 [2024-07-13 21:12:32.149296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:21:18.275 [2024-07-13 21:12:32.149306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.275 [2024-07-13 21:12:32.184473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.275 [2024-07-13 21:12:32.184527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:18.275 [2024-07-13 21:12:32.184563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.275 [2024-07-13 21:12:32.184573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.275 [2024-07-13 21:12:32.184624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.275 [2024-07-13 21:12:32.184637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata 00:21:18.275 [2024-07-13 21:12:32.184647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.275 [2024-07-13 21:12:32.184657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.275 [2024-07-13 21:12:32.184762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.275 [2024-07-13 21:12:32.184794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:18.275 [2024-07-13 21:12:32.184820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.275 [2024-07-13 21:12:32.184835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.275 [2024-07-13 21:12:32.184855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.275 [2024-07-13 21:12:32.184868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:18.275 [2024-07-13 21:12:32.184877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.275 [2024-07-13 21:12:32.184887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.535 [2024-07-13 21:12:32.260508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.535 [2024-07-13 21:12:32.260577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:18.535 [2024-07-13 21:12:32.260614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.535 [2024-07-13 21:12:32.260624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.535 [2024-07-13 21:12:32.290435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.535 [2024-07-13 21:12:32.290468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:18.535 [2024-07-13 21:12:32.290496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.535 [2024-07-13 21:12:32.290506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.535 [2024-07-13 21:12:32.290573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.535 [2024-07-13 21:12:32.290589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.535 [2024-07-13 21:12:32.290599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.535 [2024-07-13 21:12:32.290608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.535 [2024-07-13 21:12:32.290662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.535 [2024-07-13 21:12:32.290676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.535 [2024-07-13 21:12:32.290685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.535 [2024-07-13 21:12:32.290694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.535 [2024-07-13 21:12:32.290832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.535 [2024-07-13 21:12:32.290849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.535 [2024-07-13 21:12:32.290860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.535 [2024-07-13 21:12:32.290869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.535 [2024-07-13 21:12:32.290957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.535 [2024-07-13 
21:12:32.290975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:18.535 [2024-07-13 21:12:32.290987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.535 [2024-07-13 21:12:32.290997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.535 [2024-07-13 21:12:32.291037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.535 [2024-07-13 21:12:32.291050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.535 [2024-07-13 21:12:32.291061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.535 [2024-07-13 21:12:32.291070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.535 [2024-07-13 21:12:32.291124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.535 [2024-07-13 21:12:32.291140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:18.535 [2024-07-13 21:12:32.291152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.535 [2024-07-13 21:12:32.291161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.535 [2024-07-13 21:12:32.291309] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 452.008 ms, result 0 00:21:20.437 00:21:20.437 00:21:20.437 21:12:33 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:20.437 [2024-07-13 21:12:33.992712] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
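With the write-side instance shut down cleanly (result 0), restore.sh moves on to the verification read: spdk_dd reopens ftl0 as the input bdev (--ib) and copies 262144 blocks starting 131072 blocks into the device. At the FTL's 4 KiB block size that works out to a 1 GiB read from a 512 MiB offset, the same size as the 1024 MB copy that averaged 23 MBps above. The statistics dumped during that shutdown also make the reported write amplification easy to check by hand: 121792 total writes against 120832 user writes (the same 120832 LBAs held valid in Band 1) gives a WAF of 1.0079, i.e. roughly 960 blocks of FTL metadata written alongside the user data. A throwaway sketch over the same assumed console.log capture:

    # Recompute WAF from the counters in the ftl_debug.c statistics dump;
    # if the log holds several dumps, the last one seen wins.
    grep -o 'total writes: [0-9]*\|user writes: [0-9]*' console.log \
        | awk -F': ' '{ v[$1] = $2 }
              END    { printf "WAF: %.4f\n", v["total writes"] / v["user writes"] }'

For the values above this prints WAF: 1.0079, matching the figure in the dump.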
00:21:20.437 [2024-07-13 21:12:33.992890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75139 ] 00:21:20.438 [2024-07-13 21:12:34.159255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.438 [2024-07-13 21:12:34.308798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.697 [2024-07-13 21:12:34.557920] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:20.697 [2024-07-13 21:12:34.558004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:20.958 [2024-07-13 21:12:34.706875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.958 [2024-07-13 21:12:34.706916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:20.958 [2024-07-13 21:12:34.706950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:20.958 [2024-07-13 21:12:34.706960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.958 [2024-07-13 21:12:34.707016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.958 [2024-07-13 21:12:34.707033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:20.958 [2024-07-13 21:12:34.707044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:20.958 [2024-07-13 21:12:34.707053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.958 [2024-07-13 21:12:34.707081] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:20.958 [2024-07-13 21:12:34.707928] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:20.958 [2024-07-13 21:12:34.707980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.958 [2024-07-13 21:12:34.707993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:20.958 [2024-07-13 21:12:34.708005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:21:20.958 [2024-07-13 21:12:34.708014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.958 [2024-07-13 21:12:34.709263] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:20.958 [2024-07-13 21:12:34.722122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.958 [2024-07-13 21:12:34.722158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:20.958 [2024-07-13 21:12:34.722193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.860 ms 00:21:20.958 [2024-07-13 21:12:34.722203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.958 [2024-07-13 21:12:34.722262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.958 [2024-07-13 21:12:34.722280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:20.958 [2024-07-13 21:12:34.722290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:20.958 [2024-07-13 21:12:34.722299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.958 [2024-07-13 21:12:34.726638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.958 [2024-07-13 
21:12:34.726687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:20.958 [2024-07-13 21:12:34.726716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.265 ms 00:21:20.958 [2024-07-13 21:12:34.726726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.958 [2024-07-13 21:12:34.726817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.958 [2024-07-13 21:12:34.726850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:20.958 [2024-07-13 21:12:34.726860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:20.958 [2024-07-13 21:12:34.726883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.958 [2024-07-13 21:12:34.726941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.958 [2024-07-13 21:12:34.726993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:20.958 [2024-07-13 21:12:34.727005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:20.958 [2024-07-13 21:12:34.727015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.958 [2024-07-13 21:12:34.727047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:20.958 [2024-07-13 21:12:34.730636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.958 [2024-07-13 21:12:34.730667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:20.958 [2024-07-13 21:12:34.730695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.600 ms 00:21:20.958 [2024-07-13 21:12:34.730704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.958 [2024-07-13 21:12:34.730746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.958 [2024-07-13 21:12:34.730760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:20.958 [2024-07-13 21:12:34.730771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:20.958 [2024-07-13 21:12:34.730780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.958 [2024-07-13 21:12:34.730806] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:20.958 [2024-07-13 21:12:34.730829] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:20.958 [2024-07-13 21:12:34.730876] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:20.959 [2024-07-13 21:12:34.730896] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:20.959 [2024-07-13 21:12:34.731000] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:20.959 [2024-07-13 21:12:34.731014] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:20.959 [2024-07-13 21:12:34.731026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:20.959 [2024-07-13 21:12:34.731039] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731055] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731066] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:20.959 [2024-07-13 21:12:34.731076] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:20.959 [2024-07-13 21:12:34.731085] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:20.959 [2024-07-13 21:12:34.731094] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:20.959 [2024-07-13 21:12:34.731104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.959 [2024-07-13 21:12:34.731114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:20.959 [2024-07-13 21:12:34.731124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:21:20.959 [2024-07-13 21:12:34.731134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.959 [2024-07-13 21:12:34.731211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.959 [2024-07-13 21:12:34.731229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:20.959 [2024-07-13 21:12:34.731240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:20.959 [2024-07-13 21:12:34.731249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.959 [2024-07-13 21:12:34.731343] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:20.959 [2024-07-13 21:12:34.731357] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:20.959 [2024-07-13 21:12:34.731369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731389] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:20.959 [2024-07-13 21:12:34.731398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731417] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:20.959 [2024-07-13 21:12:34.731426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.959 [2024-07-13 21:12:34.731444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:20.959 [2024-07-13 21:12:34.731454] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:20.959 [2024-07-13 21:12:34.731463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.959 [2024-07-13 21:12:34.731472] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:20.959 [2024-07-13 21:12:34.731481] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:20.959 [2024-07-13 21:12:34.731490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731499] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:20.959 [2024-07-13 21:12:34.731508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:20.959 [2024-07-13 21:12:34.731517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731526] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:20.959 [2024-07-13 21:12:34.731535] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:20.959 [2024-07-13 21:12:34.731557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:20.959 [2024-07-13 21:12:34.731576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:20.959 [2024-07-13 21:12:34.731604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731622] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:20.959 [2024-07-13 21:12:34.731631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:20.959 [2024-07-13 21:12:34.731657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:20.959 [2024-07-13 21:12:34.731684] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.959 [2024-07-13 21:12:34.731703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:20.959 [2024-07-13 21:12:34.731712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:20.959 [2024-07-13 21:12:34.731721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.959 [2024-07-13 21:12:34.731729] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:20.959 [2024-07-13 21:12:34.731739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:20.959 [2024-07-13 21:12:34.731750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.959 [2024-07-13 21:12:34.731775] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:20.959 [2024-07-13 21:12:34.731784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:20.959 [2024-07-13 21:12:34.731793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:20.959 [2024-07-13 21:12:34.731803] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:20.959 [2024-07-13 21:12:34.731812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:20.959 [2024-07-13 21:12:34.731821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:20.959 [2024-07-13 21:12:34.731831] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:20.959 [2024-07-13 21:12:34.731844] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.959 [2024-07-13 21:12:34.731855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:20.959 [2024-07-13 21:12:34.731865] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:20.959 [2024-07-13 21:12:34.731875] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:20.959 [2024-07-13 21:12:34.731900] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:20.959 [2024-07-13 21:12:34.731912] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:20.959 [2024-07-13 21:12:34.731921] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:20.959 [2024-07-13 21:12:34.731931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:20.959 [2024-07-13 21:12:34.731946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:20.959 [2024-07-13 21:12:34.731960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:20.959 [2024-07-13 21:12:34.731970] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:20.959 [2024-07-13 21:12:34.731979] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:20.959 [2024-07-13 21:12:34.731990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:20.959 [2024-07-13 21:12:34.732000] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:20.959 [2024-07-13 21:12:34.732010] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:20.959 [2024-07-13 21:12:34.732028] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.959 [2024-07-13 21:12:34.732039] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:20.959 [2024-07-13 21:12:34.732053] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:20.959 [2024-07-13 21:12:34.732068] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:20.959 [2024-07-13 21:12:34.732078] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:21:20.959 [2024-07-13 21:12:34.732089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.959 [2024-07-13 21:12:34.732106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:20.959 [2024-07-13 21:12:34.732120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:21:20.959 [2024-07-13 21:12:34.732131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.959 [2024-07-13 21:12:34.747205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.959 [2024-07-13 21:12:34.747242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:20.959 [2024-07-13 21:12:34.747272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.013 ms 00:21:20.959 [2024-07-13 21:12:34.747282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.747361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.747380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:20.960 [2024-07-13 21:12:34.747390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:20.960 [2024-07-13 21:12:34.747400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.787134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.787176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:20.960 [2024-07-13 21:12:34.787207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.675 ms 00:21:20.960 [2024-07-13 21:12:34.787221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.787269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.787284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:20.960 [2024-07-13 21:12:34.787295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:20.960 [2024-07-13 21:12:34.787304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.787668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.787700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:20.960 [2024-07-13 21:12:34.787714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:21:20.960 [2024-07-13 21:12:34.787724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.787867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.787885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:20.960 [2024-07-13 21:12:34.787896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:21:20.960 [2024-07-13 21:12:34.787905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.801726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.801761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:20.960 [2024-07-13 21:12:34.801792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.796 ms 00:21:20.960 [2024-07-13 
21:12:34.801802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.814753] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:20.960 [2024-07-13 21:12:34.814820] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:20.960 [2024-07-13 21:12:34.814867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.814880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:20.960 [2024-07-13 21:12:34.814892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.941 ms 00:21:20.960 [2024-07-13 21:12:34.814902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.841057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.841108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:20.960 [2024-07-13 21:12:34.841140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.113 ms 00:21:20.960 [2024-07-13 21:12:34.841150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.853537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.853570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:20.960 [2024-07-13 21:12:34.853599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.308 ms 00:21:20.960 [2024-07-13 21:12:34.853609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.865691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.865725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:20.960 [2024-07-13 21:12:34.865754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.045 ms 00:21:20.960 [2024-07-13 21:12:34.865763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.960 [2024-07-13 21:12:34.866198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.960 [2024-07-13 21:12:34.866235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:20.960 [2024-07-13 21:12:34.866249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:21:20.960 [2024-07-13 21:12:34.866259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.220 [2024-07-13 21:12:34.925822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.220 [2024-07-13 21:12:34.925893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:21.220 [2024-07-13 21:12:34.925925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.539 ms 00:21:21.220 [2024-07-13 21:12:34.925936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.220 [2024-07-13 21:12:34.935849] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:21.220 [2024-07-13 21:12:34.937828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.220 [2024-07-13 21:12:34.937865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:21.220 [2024-07-13 21:12:34.937895] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.834 ms 00:21:21.220 [2024-07-13 21:12:34.937904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.220 [2024-07-13 21:12:34.937988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.220 [2024-07-13 21:12:34.938005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:21.220 [2024-07-13 21:12:34.938017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:21.220 [2024-07-13 21:12:34.938026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.220 [2024-07-13 21:12:34.939045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.220 [2024-07-13 21:12:34.939092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:21.220 [2024-07-13 21:12:34.939137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:21:21.220 [2024-07-13 21:12:34.939146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.220 [2024-07-13 21:12:34.940701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.220 [2024-07-13 21:12:34.940734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:21.220 [2024-07-13 21:12:34.940761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:21:21.220 [2024-07-13 21:12:34.940770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.220 [2024-07-13 21:12:34.940803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.220 [2024-07-13 21:12:34.940816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:21.220 [2024-07-13 21:12:34.940833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:21.220 [2024-07-13 21:12:34.940871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.220 [2024-07-13 21:12:34.940915] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:21.220 [2024-07-13 21:12:34.940930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.220 [2024-07-13 21:12:34.940940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:21.220 [2024-07-13 21:12:34.940953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:21.220 [2024-07-13 21:12:34.940962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.220 [2024-07-13 21:12:34.965411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.220 [2024-07-13 21:12:34.965447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:21.220 [2024-07-13 21:12:34.965477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.424 ms 00:21:21.220 [2024-07-13 21:12:34.965487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.220 [2024-07-13 21:12:34.965555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.220 [2024-07-13 21:12:34.965578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:21.220 [2024-07-13 21:12:34.965590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:21.220 [2024-07-13 21:12:34.965599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.220 [2024-07-13 21:12:34.972224] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 264.178 ms, result 0 00:22:05.633  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-13 21:13:19.387295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.633 [2024-07-13 21:13:19.387391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:05.633 [2024-07-13 21:13:19.387437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:05.633 [2024-07-13 21:13:19.387468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.633 [2024-07-13 21:13:19.387500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:05.633 [2024-07-13 21:13:19.393748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.633 [2024-07-13 21:13:19.393798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:05.633 [2024-07-13 21:13:19.393818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.225 ms 00:22:05.633 [2024-07-13 21:13:19.393833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.633 [2024-07-13 21:13:19.394216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.633 [2024-07-13 21:13:19.394258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:05.633 [2024-07-13 21:13:19.394277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:22:05.633 [2024-07-13 21:13:19.394298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.633 [2024-07-13 21:13:19.399934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.634 [2024-07-13 21:13:19.399988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:05.634 [2024-07-13 21:13:19.400009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.607 ms 00:22:05.634 [2024-07-13 21:13:19.400023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.634 [2024-07-13
21:13:19.405602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.634 [2024-07-13 21:13:19.405631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:05.634 [2024-07-13 21:13:19.405659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.527 ms 00:22:05.634 [2024-07-13 21:13:19.405669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.634 [2024-07-13 21:13:19.430482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.634 [2024-07-13 21:13:19.430535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:05.634 [2024-07-13 21:13:19.430550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.759 ms 00:22:05.634 [2024-07-13 21:13:19.430559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.634 [2024-07-13 21:13:19.445410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.634 [2024-07-13 21:13:19.445452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:05.634 [2024-07-13 21:13:19.445481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.812 ms 00:22:05.634 [2024-07-13 21:13:19.445491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.894 [2024-07-13 21:13:19.568997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.894 [2024-07-13 21:13:19.569054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:05.894 [2024-07-13 21:13:19.569086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 123.480 ms 00:22:05.894 [2024-07-13 21:13:19.569111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.894 [2024-07-13 21:13:19.593964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.894 [2024-07-13 21:13:19.594000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:05.894 [2024-07-13 21:13:19.594029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.832 ms 00:22:05.894 [2024-07-13 21:13:19.594038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.894 [2024-07-13 21:13:19.618475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.894 [2024-07-13 21:13:19.618510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:05.894 [2024-07-13 21:13:19.618539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.399 ms 00:22:05.894 [2024-07-13 21:13:19.618548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.894 [2024-07-13 21:13:19.642609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.894 [2024-07-13 21:13:19.642644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:05.894 [2024-07-13 21:13:19.642673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.024 ms 00:22:05.894 [2024-07-13 21:13:19.642682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.894 [2024-07-13 21:13:19.666830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.894 [2024-07-13 21:13:19.666886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:05.894 [2024-07-13 21:13:19.666916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.073 ms 00:22:05.894 [2024-07-13 21:13:19.666926] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.894 [2024-07-13 21:13:19.666963] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:05.894 [2024-07-13 21:13:19.666984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:22:05.894 [2024-07-13 21:13:19.666996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:05.894 [2024-07-13 21:13:19.667374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667495] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667744] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.667991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.668001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 
21:13:19.668011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.668022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:05.895 [2024-07-13 21:13:19.668039] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:05.895 [2024-07-13 21:13:19.668049] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e284b7a-a9ff-453f-932a-f2e27dc3593a 00:22:05.895 [2024-07-13 21:13:19.668060] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:22:05.895 [2024-07-13 21:13:19.668070] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13760 00:22:05.895 [2024-07-13 21:13:19.668079] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12800 00:22:05.895 [2024-07-13 21:13:19.668089] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0750 00:22:05.895 [2024-07-13 21:13:19.668099] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:05.895 [2024-07-13 21:13:19.668115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:05.895 [2024-07-13 21:13:19.668125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:05.895 [2024-07-13 21:13:19.668134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:05.895 [2024-07-13 21:13:19.668143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:05.895 [2024-07-13 21:13:19.668153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.895 [2024-07-13 21:13:19.668163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:05.895 [2024-07-13 21:13:19.668173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:22:05.895 [2024-07-13 21:13:19.668183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.895 [2024-07-13 21:13:19.681860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.895 [2024-07-13 21:13:19.681891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:05.895 [2024-07-13 21:13:19.681920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.631 ms 00:22:05.895 [2024-07-13 21:13:19.681935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.895 [2024-07-13 21:13:19.682211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.895 [2024-07-13 21:13:19.682236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:05.895 [2024-07-13 21:13:19.682249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:22:05.895 [2024-07-13 21:13:19.682259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.895 [2024-07-13 21:13:19.717634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.895 [2024-07-13 21:13:19.717677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:05.895 [2024-07-13 21:13:19.717707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.895 [2024-07-13 21:13:19.717717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.895 [2024-07-13 21:13:19.717767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.895 [2024-07-13 21:13:19.717780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata 00:22:05.895 [2024-07-13 21:13:19.717790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.895 [2024-07-13 21:13:19.717799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.895 [2024-07-13 21:13:19.717890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.895 [2024-07-13 21:13:19.717907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:05.895 [2024-07-13 21:13:19.717924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.895 [2024-07-13 21:13:19.717948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.895 [2024-07-13 21:13:19.717999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.895 [2024-07-13 21:13:19.718013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:05.895 [2024-07-13 21:13:19.718023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.895 [2024-07-13 21:13:19.718033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.895 [2024-07-13 21:13:19.792732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.895 [2024-07-13 21:13:19.792788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:05.895 [2024-07-13 21:13:19.792818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.896 [2024-07-13 21:13:19.792828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.155 [2024-07-13 21:13:19.824111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.155 [2024-07-13 21:13:19.824162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.155 [2024-07-13 21:13:19.824206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.155 [2024-07-13 21:13:19.824232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.155 [2024-07-13 21:13:19.824300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.155 [2024-07-13 21:13:19.824316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.155 [2024-07-13 21:13:19.824326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.155 [2024-07-13 21:13:19.824342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.155 [2024-07-13 21:13:19.824392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.155 [2024-07-13 21:13:19.824447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.155 [2024-07-13 21:13:19.824476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.155 [2024-07-13 21:13:19.824487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.155 [2024-07-13 21:13:19.824631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.155 [2024-07-13 21:13:19.824652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.155 [2024-07-13 21:13:19.824665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.155 [2024-07-13 21:13:19.824676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.155 [2024-07-13 21:13:19.824745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.155 [2024-07-13 
21:13:19.824796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:06.155 [2024-07-13 21:13:19.824823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.155 [2024-07-13 21:13:19.824833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.155 [2024-07-13 21:13:19.824925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.155 [2024-07-13 21:13:19.824947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.155 [2024-07-13 21:13:19.824960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.155 [2024-07-13 21:13:19.824971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.155 [2024-07-13 21:13:19.825032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.155 [2024-07-13 21:13:19.825048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.155 [2024-07-13 21:13:19.825060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.155 [2024-07-13 21:13:19.825070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.155 [2024-07-13 21:13:19.825281] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 437.918 ms, result 0 00:22:07.091 00:22:07.091 00:22:07.091 21:13:20 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:09.005 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:09.006 21:13:22 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:09.006 21:13:22 -- ftl/restore.sh@85 -- # restore_kill 00:22:09.006 21:13:22 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:09.006 21:13:22 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:09.006 21:13:22 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:09.006 21:13:22 -- ftl/restore.sh@32 -- # killprocess 73436 00:22:09.006 21:13:22 -- common/autotest_common.sh@926 -- # '[' -z 73436 ']' 00:22:09.006 Process with pid 73436 is not found 00:22:09.006 Remove shared memory files 00:22:09.006 21:13:22 -- common/autotest_common.sh@930 -- # kill -0 73436 00:22:09.006 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (73436) - No such process 00:22:09.006 21:13:22 -- common/autotest_common.sh@953 -- # echo 'Process with pid 73436 is not found' 00:22:09.006 21:13:22 -- ftl/restore.sh@33 -- # remove_shm 00:22:09.006 21:13:22 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:09.006 21:13:22 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:09.006 21:13:22 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:09.006 21:13:22 -- ftl/common.sh@207 -- # rm -f rm -f 00:22:09.006 21:13:22 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:09.006 21:13:22 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:09.006 00:22:09.006 real 3m32.429s 00:22:09.006 user 3m19.064s 00:22:09.006 sys 0m14.821s 00:22:09.006 21:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:09.006 ************************************ 00:22:09.006 21:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:09.006 END TEST ftl_restore 00:22:09.006 ************************************ 00:22:09.006 21:13:22 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 
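The 'testfile: OK' line above was the actual pass/fail assertion of the just-finished restore test: data written through ftl0 before the shutdown must be bit-identical once the device is brought back up. A minimal sketch of that verify pattern, in the test's own shell idiom (the checksum-generation step happens earlier in the test and is assumed here; only the '-c' check appears in this trace):

    md5sum "$testdir/testfile" > "$testdir/testfile.md5"   # before shutdown: record checksum of written data
    md5sum -c "$testdir/testfile.md5"                      # after restore: prints 'testfile: OK' on success

As a side check, the WAF figure in the shutdown statistics above is simply total writes over user writes: 13760 / 12800 = 1.075, matching the logged 1.0750.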
00:22:09.006 21:13:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:09.006 21:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:09.006 21:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:09.006 ************************************ 00:22:09.006 START TEST ftl_dirty_shutdown 00:22:09.006 ************************************ 00:22:09.006 21:13:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:09.006 * Looking for test storage... 00:22:09.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:09.006 21:13:22 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:09.006 21:13:22 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:09.006 21:13:22 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:09.006 21:13:22 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:09.006 21:13:22 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:09.006 21:13:22 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:09.006 21:13:22 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:09.006 21:13:22 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:09.006 21:13:22 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.006 21:13:22 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.006 21:13:22 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:09.006 21:13:22 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:09.006 21:13:22 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:09.006 21:13:22 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:09.006 21:13:22 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:09.006 21:13:22 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:09.006 21:13:22 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.006 21:13:22 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.006 21:13:22 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:09.006 21:13:22 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:09.006 21:13:22 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:09.006 21:13:22 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:09.006 21:13:22 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:09.006 21:13:22 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:09.006 21:13:22 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:09.006 21:13:22 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:09.006 21:13:22 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:09.006 21:13:22 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
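The ftl/common.sh prologue traced above (dirname, readlink -f, testdir=, rootdir=, rpc_py=) boils down to resolving the test and repo roots from the calling script's path. A condensed sketch of that idiom:

    # common.sh is sourced, so $0 is still the calling test script (dirty_shutdown.sh)
    testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py             # JSON-RPC client used for the bdev setup below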
00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75687 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:09.006 21:13:22 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75687 00:22:09.006 21:13:22 -- common/autotest_common.sh@819 -- # '[' -z 75687 ']' 00:22:09.006 21:13:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.006 21:13:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:09.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.006 21:13:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.006 21:13:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:09.006 21:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:09.006 [2024-07-13 21:13:22.843226] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
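The option handling traced above ('getopts :u:c: opt' through 'device=0000:00:07.0') is consistent with a loop like the following sketch; this is an assumed shape matching the trace, not the verbatim script:

    while getopts ':u:c:' opt; do
        case $opt in
            u) uuid=$OPTARG ;;       # parsed but not supplied in this run
            c) nv_cache=$OPTARG ;;   # -c 0000:00:06.0 selects the NV cache PCIe address
        esac
    done
    shift $((OPTIND - 1))            # expands to 'shift 2' in this run
    device=$1                        # remaining positional arg: 0000:00:07.0 (base bdev)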
00:22:09.006 [2024-07-13 21:13:22.843396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75687 ] 00:22:09.265 [2024-07-13 21:13:23.014036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.523 [2024-07-13 21:13:23.222720] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:09.523 [2024-07-13 21:13:23.223016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.458 21:13:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:10.458 21:13:24 -- common/autotest_common.sh@852 -- # return 0 00:22:10.458 21:13:24 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:10.458 21:13:24 -- ftl/common.sh@54 -- # local name=nvme0 00:22:10.458 21:13:24 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:10.458 21:13:24 -- ftl/common.sh@56 -- # local size=103424 00:22:10.458 21:13:24 -- ftl/common.sh@59 -- # local base_bdev 00:22:10.458 21:13:24 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:10.717 21:13:24 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:10.717 21:13:24 -- ftl/common.sh@62 -- # local base_size 00:22:10.717 21:13:24 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:10.717 21:13:24 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:22:10.717 21:13:24 -- common/autotest_common.sh@1358 -- # local bdev_info 00:22:10.717 21:13:24 -- common/autotest_common.sh@1359 -- # local bs 00:22:10.717 21:13:24 -- common/autotest_common.sh@1360 -- # local nb 00:22:10.717 21:13:24 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:10.978 21:13:24 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:22:10.978 { 00:22:10.978 "name": "nvme0n1", 00:22:10.978 "aliases": [ 00:22:10.978 "0279ac55-b291-4add-9cae-9a5a1036e4ed" 00:22:10.978 ], 00:22:10.978 "product_name": "NVMe disk", 00:22:10.978 "block_size": 4096, 00:22:10.978 "num_blocks": 1310720, 00:22:10.978 "uuid": "0279ac55-b291-4add-9cae-9a5a1036e4ed", 00:22:10.978 "assigned_rate_limits": { 00:22:10.978 "rw_ios_per_sec": 0, 00:22:10.978 "rw_mbytes_per_sec": 0, 00:22:10.978 "r_mbytes_per_sec": 0, 00:22:10.978 "w_mbytes_per_sec": 0 00:22:10.978 }, 00:22:10.978 "claimed": true, 00:22:10.978 "claim_type": "read_many_write_one", 00:22:10.978 "zoned": false, 00:22:10.978 "supported_io_types": { 00:22:10.978 "read": true, 00:22:10.978 "write": true, 00:22:10.978 "unmap": true, 00:22:10.978 "write_zeroes": true, 00:22:10.978 "flush": true, 00:22:10.978 "reset": true, 00:22:10.978 "compare": true, 00:22:10.978 "compare_and_write": false, 00:22:10.978 "abort": true, 00:22:10.978 "nvme_admin": true, 00:22:10.978 "nvme_io": true 00:22:10.978 }, 00:22:10.978 "driver_specific": { 00:22:10.978 "nvme": [ 00:22:10.978 { 00:22:10.978 "pci_address": "0000:00:07.0", 00:22:10.978 "trid": { 00:22:10.978 "trtype": "PCIe", 00:22:10.978 "traddr": "0000:00:07.0" 00:22:10.978 }, 00:22:10.978 "ctrlr_data": { 00:22:10.978 "cntlid": 0, 00:22:10.978 "vendor_id": "0x1b36", 00:22:10.978 "model_number": "QEMU NVMe Ctrl", 00:22:10.978 "serial_number": "12341", 00:22:10.978 "firmware_revision": "8.0.0", 00:22:10.978 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:10.978 "oacs": { 00:22:10.978 "security": 
0, 00:22:10.978 "format": 1, 00:22:10.978 "firmware": 0, 00:22:10.978 "ns_manage": 1 00:22:10.978 }, 00:22:10.978 "multi_ctrlr": false, 00:22:10.978 "ana_reporting": false 00:22:10.978 }, 00:22:10.978 "vs": { 00:22:10.978 "nvme_version": "1.4" 00:22:10.978 }, 00:22:10.978 "ns_data": { 00:22:10.978 "id": 1, 00:22:10.978 "can_share": false 00:22:10.978 } 00:22:10.978 } 00:22:10.978 ], 00:22:10.978 "mp_policy": "active_passive" 00:22:10.978 } 00:22:10.978 } 00:22:10.978 ]' 00:22:10.978 21:13:24 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:22:10.978 21:13:24 -- common/autotest_common.sh@1362 -- # bs=4096 00:22:10.978 21:13:24 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:22:11.237 21:13:24 -- common/autotest_common.sh@1363 -- # nb=1310720 00:22:11.237 21:13:24 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:22:11.237 21:13:24 -- common/autotest_common.sh@1367 -- # echo 5120 00:22:11.237 21:13:24 -- ftl/common.sh@63 -- # base_size=5120 00:22:11.237 21:13:24 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:11.237 21:13:24 -- ftl/common.sh@67 -- # clear_lvols 00:22:11.237 21:13:24 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:11.237 21:13:24 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:11.237 21:13:25 -- ftl/common.sh@28 -- # stores=c0ff64b5-7271-4fa6-b4bb-96dec14f331f 00:22:11.237 21:13:25 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:11.237 21:13:25 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c0ff64b5-7271-4fa6-b4bb-96dec14f331f 00:22:11.496 21:13:25 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:11.755 21:13:25 -- ftl/common.sh@68 -- # lvs=9eeba675-a7ae-434d-851a-df33acc583f2 00:22:11.755 21:13:25 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9eeba675-a7ae-434d-851a-df33acc583f2 00:22:12.013 21:13:25 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:12.013 21:13:25 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:12.013 21:13:25 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:12.013 21:13:25 -- ftl/common.sh@35 -- # local name=nvc0 00:22:12.013 21:13:25 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:12.013 21:13:25 -- ftl/common.sh@37 -- # local base_bdev=4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:12.013 21:13:25 -- ftl/common.sh@38 -- # local cache_size= 00:22:12.013 21:13:25 -- ftl/common.sh@41 -- # get_bdev_size 4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:12.013 21:13:25 -- common/autotest_common.sh@1357 -- # local bdev_name=4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:12.013 21:13:25 -- common/autotest_common.sh@1358 -- # local bdev_info 00:22:12.013 21:13:25 -- common/autotest_common.sh@1359 -- # local bs 00:22:12.013 21:13:25 -- common/autotest_common.sh@1360 -- # local nb 00:22:12.013 21:13:25 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:12.272 21:13:26 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:22:12.272 { 00:22:12.272 "name": "4a3b0780-abf5-455a-b38a-137d0a273a81", 00:22:12.272 "aliases": [ 00:22:12.272 "lvs/nvme0n1p0" 00:22:12.272 ], 00:22:12.272 "product_name": "Logical Volume", 00:22:12.272 "block_size": 4096, 00:22:12.272 "num_blocks": 26476544, 00:22:12.272 
"uuid": "4a3b0780-abf5-455a-b38a-137d0a273a81", 00:22:12.272 "assigned_rate_limits": { 00:22:12.272 "rw_ios_per_sec": 0, 00:22:12.272 "rw_mbytes_per_sec": 0, 00:22:12.272 "r_mbytes_per_sec": 0, 00:22:12.272 "w_mbytes_per_sec": 0 00:22:12.272 }, 00:22:12.272 "claimed": false, 00:22:12.272 "zoned": false, 00:22:12.272 "supported_io_types": { 00:22:12.272 "read": true, 00:22:12.272 "write": true, 00:22:12.272 "unmap": true, 00:22:12.272 "write_zeroes": true, 00:22:12.272 "flush": false, 00:22:12.272 "reset": true, 00:22:12.272 "compare": false, 00:22:12.272 "compare_and_write": false, 00:22:12.272 "abort": false, 00:22:12.272 "nvme_admin": false, 00:22:12.272 "nvme_io": false 00:22:12.272 }, 00:22:12.272 "driver_specific": { 00:22:12.272 "lvol": { 00:22:12.272 "lvol_store_uuid": "9eeba675-a7ae-434d-851a-df33acc583f2", 00:22:12.272 "base_bdev": "nvme0n1", 00:22:12.272 "thin_provision": true, 00:22:12.272 "snapshot": false, 00:22:12.272 "clone": false, 00:22:12.272 "esnap_clone": false 00:22:12.272 } 00:22:12.272 } 00:22:12.272 } 00:22:12.272 ]' 00:22:12.272 21:13:26 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:22:12.272 21:13:26 -- common/autotest_common.sh@1362 -- # bs=4096 00:22:12.272 21:13:26 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:22:12.272 21:13:26 -- common/autotest_common.sh@1363 -- # nb=26476544 00:22:12.272 21:13:26 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:22:12.272 21:13:26 -- common/autotest_common.sh@1367 -- # echo 103424 00:22:12.272 21:13:26 -- ftl/common.sh@41 -- # local base_size=5171 00:22:12.272 21:13:26 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:12.272 21:13:26 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:22:12.530 21:13:26 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:12.530 21:13:26 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:12.530 21:13:26 -- ftl/common.sh@48 -- # get_bdev_size 4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:12.530 21:13:26 -- common/autotest_common.sh@1357 -- # local bdev_name=4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:12.530 21:13:26 -- common/autotest_common.sh@1358 -- # local bdev_info 00:22:12.530 21:13:26 -- common/autotest_common.sh@1359 -- # local bs 00:22:12.530 21:13:26 -- common/autotest_common.sh@1360 -- # local nb 00:22:12.530 21:13:26 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:12.789 21:13:26 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:22:12.789 { 00:22:12.789 "name": "4a3b0780-abf5-455a-b38a-137d0a273a81", 00:22:12.789 "aliases": [ 00:22:12.789 "lvs/nvme0n1p0" 00:22:12.789 ], 00:22:12.789 "product_name": "Logical Volume", 00:22:12.789 "block_size": 4096, 00:22:12.789 "num_blocks": 26476544, 00:22:12.789 "uuid": "4a3b0780-abf5-455a-b38a-137d0a273a81", 00:22:12.789 "assigned_rate_limits": { 00:22:12.789 "rw_ios_per_sec": 0, 00:22:12.789 "rw_mbytes_per_sec": 0, 00:22:12.789 "r_mbytes_per_sec": 0, 00:22:12.789 "w_mbytes_per_sec": 0 00:22:12.789 }, 00:22:12.789 "claimed": false, 00:22:12.789 "zoned": false, 00:22:12.789 "supported_io_types": { 00:22:12.789 "read": true, 00:22:12.789 "write": true, 00:22:12.789 "unmap": true, 00:22:12.789 "write_zeroes": true, 00:22:12.789 "flush": false, 00:22:12.789 "reset": true, 00:22:12.789 "compare": false, 00:22:12.789 "compare_and_write": false, 00:22:12.789 "abort": false, 00:22:12.789 "nvme_admin": false, 00:22:12.789 "nvme_io": false 00:22:12.789 }, 
00:22:12.789 "driver_specific": { 00:22:12.789 "lvol": { 00:22:12.789 "lvol_store_uuid": "9eeba675-a7ae-434d-851a-df33acc583f2", 00:22:12.789 "base_bdev": "nvme0n1", 00:22:12.789 "thin_provision": true, 00:22:12.789 "snapshot": false, 00:22:12.789 "clone": false, 00:22:12.789 "esnap_clone": false 00:22:12.789 } 00:22:12.789 } 00:22:12.789 } 00:22:12.789 ]' 00:22:12.789 21:13:26 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:22:12.789 21:13:26 -- common/autotest_common.sh@1362 -- # bs=4096 00:22:12.789 21:13:26 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:22:12.789 21:13:26 -- common/autotest_common.sh@1363 -- # nb=26476544 00:22:12.789 21:13:26 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:22:12.789 21:13:26 -- common/autotest_common.sh@1367 -- # echo 103424 00:22:12.789 21:13:26 -- ftl/common.sh@48 -- # cache_size=5171 00:22:12.789 21:13:26 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:13.048 21:13:26 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:13.048 21:13:26 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:13.048 21:13:26 -- common/autotest_common.sh@1357 -- # local bdev_name=4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:13.048 21:13:26 -- common/autotest_common.sh@1358 -- # local bdev_info 00:22:13.048 21:13:26 -- common/autotest_common.sh@1359 -- # local bs 00:22:13.048 21:13:26 -- common/autotest_common.sh@1360 -- # local nb 00:22:13.048 21:13:26 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a3b0780-abf5-455a-b38a-137d0a273a81 00:22:13.306 21:13:27 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:22:13.306 { 00:22:13.306 "name": "4a3b0780-abf5-455a-b38a-137d0a273a81", 00:22:13.306 "aliases": [ 00:22:13.306 "lvs/nvme0n1p0" 00:22:13.306 ], 00:22:13.306 "product_name": "Logical Volume", 00:22:13.306 "block_size": 4096, 00:22:13.306 "num_blocks": 26476544, 00:22:13.306 "uuid": "4a3b0780-abf5-455a-b38a-137d0a273a81", 00:22:13.306 "assigned_rate_limits": { 00:22:13.306 "rw_ios_per_sec": 0, 00:22:13.306 "rw_mbytes_per_sec": 0, 00:22:13.306 "r_mbytes_per_sec": 0, 00:22:13.306 "w_mbytes_per_sec": 0 00:22:13.306 }, 00:22:13.306 "claimed": false, 00:22:13.306 "zoned": false, 00:22:13.306 "supported_io_types": { 00:22:13.306 "read": true, 00:22:13.306 "write": true, 00:22:13.306 "unmap": true, 00:22:13.306 "write_zeroes": true, 00:22:13.306 "flush": false, 00:22:13.306 "reset": true, 00:22:13.306 "compare": false, 00:22:13.306 "compare_and_write": false, 00:22:13.306 "abort": false, 00:22:13.306 "nvme_admin": false, 00:22:13.306 "nvme_io": false 00:22:13.306 }, 00:22:13.306 "driver_specific": { 00:22:13.306 "lvol": { 00:22:13.306 "lvol_store_uuid": "9eeba675-a7ae-434d-851a-df33acc583f2", 00:22:13.306 "base_bdev": "nvme0n1", 00:22:13.306 "thin_provision": true, 00:22:13.306 "snapshot": false, 00:22:13.306 "clone": false, 00:22:13.306 "esnap_clone": false 00:22:13.306 } 00:22:13.306 } 00:22:13.306 } 00:22:13.306 ]' 00:22:13.306 21:13:27 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:22:13.306 21:13:27 -- common/autotest_common.sh@1362 -- # bs=4096 00:22:13.306 21:13:27 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:22:13.306 21:13:27 -- common/autotest_common.sh@1363 -- # nb=26476544 00:22:13.306 21:13:27 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:22:13.306 21:13:27 -- common/autotest_common.sh@1367 -- # echo 103424 00:22:13.306 
21:13:27 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:13.306 21:13:27 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4a3b0780-abf5-455a-b38a-137d0a273a81 --l2p_dram_limit 10' 00:22:13.306 21:13:27 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:13.306 21:13:27 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:22:13.306 21:13:27 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:13.306 21:13:27 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4a3b0780-abf5-455a-b38a-137d0a273a81 --l2p_dram_limit 10 -c nvc0n1p0 00:22:13.565 [2024-07-13 21:13:27.303009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.303074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:13.565 [2024-07-13 21:13:27.303111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:13.565 [2024-07-13 21:13:27.303122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.303208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.303242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.565 [2024-07-13 21:13:27.303256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:13.565 [2024-07-13 21:13:27.303266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.303296] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:13.565 [2024-07-13 21:13:27.304207] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:13.565 [2024-07-13 21:13:27.304264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.304278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.565 [2024-07-13 21:13:27.304291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:22:13.565 [2024-07-13 21:13:27.304302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.304531] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d123a481-2c73-4fd0-aeba-33bdf8de3021 00:22:13.565 [2024-07-13 21:13:27.305509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.305566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:13.565 [2024-07-13 21:13:27.305580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:13.565 [2024-07-13 21:13:27.305593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.309629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.309673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.565 [2024-07-13 21:13:27.309703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.987 ms 00:22:13.565 [2024-07-13 21:13:27.309715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.309911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.309937] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.565 [2024-07-13 21:13:27.309950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:22:13.565 [2024-07-13 21:13:27.309967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.310043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.310064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:13.565 [2024-07-13 21:13:27.310076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:13.565 [2024-07-13 21:13:27.310093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.310126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:13.565 [2024-07-13 21:13:27.313972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.314009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.565 [2024-07-13 21:13:27.314042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.855 ms 00:22:13.565 [2024-07-13 21:13:27.314052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.314093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.314108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:13.565 [2024-07-13 21:13:27.314120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:13.565 [2024-07-13 21:13:27.314130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.314194] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:13.565 [2024-07-13 21:13:27.314331] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:13.565 [2024-07-13 21:13:27.314365] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:13.565 [2024-07-13 21:13:27.314379] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:13.565 [2024-07-13 21:13:27.314394] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:13.565 [2024-07-13 21:13:27.314407] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:13.565 [2024-07-13 21:13:27.314420] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:13.565 [2024-07-13 21:13:27.314431] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:13.565 [2024-07-13 21:13:27.314442] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:13.565 [2024-07-13 21:13:27.314457] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:13.565 [2024-07-13 21:13:27.314469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.314480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:13.565 [2024-07-13 21:13:27.314511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:22:13.565 [2024-07-13 21:13:27.314523] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.314593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.565 [2024-07-13 21:13:27.314607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:13.565 [2024-07-13 21:13:27.314620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:13.565 [2024-07-13 21:13:27.314630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.565 [2024-07-13 21:13:27.314712] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:13.565 [2024-07-13 21:13:27.314730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:13.565 [2024-07-13 21:13:27.314745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.565 [2024-07-13 21:13:27.314756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.565 [2024-07-13 21:13:27.314769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:13.565 [2024-07-13 21:13:27.314779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:13.565 [2024-07-13 21:13:27.314791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:13.566 [2024-07-13 21:13:27.314801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:13.566 [2024-07-13 21:13:27.314813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:13.566 [2024-07-13 21:13:27.314823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.566 [2024-07-13 21:13:27.314847] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:13.566 [2024-07-13 21:13:27.314860] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:13.566 [2024-07-13 21:13:27.314874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.566 [2024-07-13 21:13:27.314885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:13.566 [2024-07-13 21:13:27.314897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:13.566 [2024-07-13 21:13:27.314906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.566 [2024-07-13 21:13:27.314920] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:13.566 [2024-07-13 21:13:27.314930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:13.566 [2024-07-13 21:13:27.314941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.566 [2024-07-13 21:13:27.314953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:13.566 [2024-07-13 21:13:27.314965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:13.566 [2024-07-13 21:13:27.314975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:13.566 [2024-07-13 21:13:27.314987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:13.566 [2024-07-13 21:13:27.314997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:13.566 [2024-07-13 21:13:27.315009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:13.566 [2024-07-13 21:13:27.315019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:13.566 [2024-07-13 21:13:27.315031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:13.566 [2024-07-13 21:13:27.315041] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:13.566 [2024-07-13 21:13:27.315052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:13.566 [2024-07-13 21:13:27.315062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:13.566 [2024-07-13 21:13:27.315074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:13.566 [2024-07-13 21:13:27.315083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:13.566 [2024-07-13 21:13:27.315097] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:13.566 [2024-07-13 21:13:27.315107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:13.566 [2024-07-13 21:13:27.315118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:13.566 [2024-07-13 21:13:27.315128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:13.566 [2024-07-13 21:13:27.315140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.566 [2024-07-13 21:13:27.315150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:13.566 [2024-07-13 21:13:27.315163] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:13.566 [2024-07-13 21:13:27.315173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.566 [2024-07-13 21:13:27.315184] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:13.566 [2024-07-13 21:13:27.315195] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:13.566 [2024-07-13 21:13:27.315208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.566 [2024-07-13 21:13:27.315218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.566 [2024-07-13 21:13:27.315231] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:13.566 [2024-07-13 21:13:27.315241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:13.566 [2024-07-13 21:13:27.315253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:13.566 [2024-07-13 21:13:27.315263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:13.566 [2024-07-13 21:13:27.315276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:13.566 [2024-07-13 21:13:27.315286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:13.566 [2024-07-13 21:13:27.315299] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:13.566 [2024-07-13 21:13:27.315314] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.566 [2024-07-13 21:13:27.315330] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:13.566 [2024-07-13 21:13:27.315341] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:13.566 [2024-07-13 21:13:27.315354] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:13.566 [2024-07-13 21:13:27.315364] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 
00:22:13.566 [2024-07-13 21:13:27.315377] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:13.566 [2024-07-13 21:13:27.315388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:13.566 [2024-07-13 21:13:27.315400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:13.566 [2024-07-13 21:13:27.315410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:13.566 [2024-07-13 21:13:27.315423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:13.566 [2024-07-13 21:13:27.315433] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:13.566 [2024-07-13 21:13:27.315445] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:13.566 [2024-07-13 21:13:27.315457] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:13.566 [2024-07-13 21:13:27.315473] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:13.566 [2024-07-13 21:13:27.315484] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:13.566 [2024-07-13 21:13:27.315497] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.566 [2024-07-13 21:13:27.315509] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:13.566 [2024-07-13 21:13:27.315522] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:13.566 [2024-07-13 21:13:27.315533] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:13.566 [2024-07-13 21:13:27.315545] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:13.566 [2024-07-13 21:13:27.315557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.566 [2024-07-13 21:13:27.315570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:13.566 [2024-07-13 21:13:27.315580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:22:13.566 [2024-07-13 21:13:27.315592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.566 [2024-07-13 21:13:27.330571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.566 [2024-07-13 21:13:27.330612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.566 [2024-07-13 21:13:27.330645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.929 ms 00:22:13.566 [2024-07-13 21:13:27.330657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.566 [2024-07-13 21:13:27.330742] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.566 [2024-07-13 21:13:27.330760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:13.566 [2024-07-13 21:13:27.330772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:13.566 [2024-07-13 21:13:27.330783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.566 [2024-07-13 21:13:27.360864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.566 [2024-07-13 21:13:27.360913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.566 [2024-07-13 21:13:27.360944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.994 ms 00:22:13.566 [2024-07-13 21:13:27.360956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.566 [2024-07-13 21:13:27.360995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.566 [2024-07-13 21:13:27.361014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.566 [2024-07-13 21:13:27.361025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:13.566 [2024-07-13 21:13:27.361037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.566 [2024-07-13 21:13:27.361431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.566 [2024-07-13 21:13:27.361482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.566 [2024-07-13 21:13:27.361496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:22:13.566 [2024-07-13 21:13:27.361511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.566 [2024-07-13 21:13:27.361637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.566 [2024-07-13 21:13:27.361658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.566 [2024-07-13 21:13:27.361670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:13.566 [2024-07-13 21:13:27.361682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.566 [2024-07-13 21:13:27.376703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.566 [2024-07-13 21:13:27.376775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.566 [2024-07-13 21:13:27.376806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.998 ms 00:22:13.566 [2024-07-13 21:13:27.376819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.566 [2024-07-13 21:13:27.387379] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:13.566 [2024-07-13 21:13:27.389907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.566 [2024-07-13 21:13:27.389941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:13.566 [2024-07-13 21:13:27.389974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.980 ms 00:22:13.566 [2024-07-13 21:13:27.389985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.566 [2024-07-13 21:13:27.457925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.566 [2024-07-13 21:13:27.457993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:13.566 [2024-07-13 21:13:27.458030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 67.904 ms 00:22:13.566 [2024-07-13 21:13:27.458040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.566 [2024-07-13 21:13:27.458097] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:22:13.566 [2024-07-13 21:13:27.458116] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:22:16.099 [2024-07-13 21:13:29.979643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.099 [2024-07-13 21:13:29.979723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:16.099 [2024-07-13 21:13:29.979775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2521.555 ms 00:22:16.099 [2024-07-13 21:13:29.979786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.099 [2024-07-13 21:13:29.980006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.099 [2024-07-13 21:13:29.980025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:16.099 [2024-07-13 21:13:29.980071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:22:16.099 [2024-07-13 21:13:29.980089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.099 [2024-07-13 21:13:30.005303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.099 [2024-07-13 21:13:30.005358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:16.099 [2024-07-13 21:13:30.005393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.148 ms 00:22:16.099 [2024-07-13 21:13:30.005412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.379 [2024-07-13 21:13:30.033623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.379 [2024-07-13 21:13:30.033674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:16.379 [2024-07-13 21:13:30.033710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.163 ms 00:22:16.379 [2024-07-13 21:13:30.033720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.379 [2024-07-13 21:13:30.034159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.379 [2024-07-13 21:13:30.034196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:16.379 [2024-07-13 21:13:30.034213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:22:16.379 [2024-07-13 21:13:30.034225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.379 [2024-07-13 21:13:30.100898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.379 [2024-07-13 21:13:30.100961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:16.379 [2024-07-13 21:13:30.100997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.596 ms 00:22:16.379 [2024-07-13 21:13:30.101008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.379 [2024-07-13 21:13:30.126097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.379 [2024-07-13 21:13:30.126133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:16.379 [2024-07-13 21:13:30.126167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.040 ms 00:22:16.379 
[2024-07-13 21:13:30.126180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.379 [2024-07-13 21:13:30.127771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.379 [2024-07-13 21:13:30.127806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:16.379 [2024-07-13 21:13:30.127839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:22:16.379 [2024-07-13 21:13:30.127859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.379 [2024-07-13 21:13:30.152420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.379 [2024-07-13 21:13:30.152477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:16.379 [2024-07-13 21:13:30.152511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.512 ms 00:22:16.379 [2024-07-13 21:13:30.152521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.379 [2024-07-13 21:13:30.152577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.379 [2024-07-13 21:13:30.152596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:16.379 [2024-07-13 21:13:30.152610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:16.379 [2024-07-13 21:13:30.152620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.379 [2024-07-13 21:13:30.152745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.379 [2024-07-13 21:13:30.152762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:16.379 [2024-07-13 21:13:30.152779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:16.379 [2024-07-13 21:13:30.152789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.379 [2024-07-13 21:13:30.153882] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2850.335 ms, result 0 00:22:16.379 { 00:22:16.379 "name": "ftl0", 00:22:16.379 "uuid": "d123a481-2c73-4fd0-aeba-33bdf8de3021" 00:22:16.379 } 00:22:16.379 21:13:30 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:16.379 21:13:30 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:16.638 21:13:30 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:16.638 21:13:30 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:16.638 21:13:30 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:16.897 /dev/nbd0 00:22:16.897 21:13:30 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:16.897 21:13:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:16.897 21:13:30 -- common/autotest_common.sh@857 -- # local i 00:22:16.897 21:13:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:16.897 21:13:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:16.897 21:13:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:16.897 21:13:30 -- common/autotest_common.sh@861 -- # break 00:22:16.897 21:13:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:16.897 21:13:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:16.897 21:13:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:16.897 1+0 records in 00:22:16.897 
1+0 records out 00:22:16.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485273 s, 8.4 MB/s 00:22:16.897 21:13:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:16.897 21:13:30 -- common/autotest_common.sh@874 -- # size=4096 00:22:16.897 21:13:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:16.897 21:13:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:16.897 21:13:30 -- common/autotest_common.sh@877 -- # return 0 00:22:16.897 21:13:30 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:16.897 [2024-07-13 21:13:30.795997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:16.897 [2024-07-13 21:13:30.796183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75830 ] 00:22:17.156 [2024-07-13 21:13:30.967506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.415 [2024-07-13 21:13:31.186777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.477  Copying: 214/1024 [MB] (214 MBps) Copying: 429/1024 [MB] (214 MBps) Copying: 645/1024 [MB] (216 MBps) Copying: 852/1024 [MB] (207 MBps) Copying: 1024/1024 [MB] (average 212 MBps) 00:22:23.477 00:22:23.477 21:13:37 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:25.384 21:13:38 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:25.384 [2024-07-13 21:13:39.068419] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:25.384 [2024-07-13 21:13:39.068633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75915 ] 00:22:25.384 [2024-07-13 21:13:39.235245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.688 [2024-07-13 21:13:39.394262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.763  Copying: 12/1024 [MB] (12 MBps) Copying: 26/1024 [MB] (14 MBps) Copying: 41/1024 [MB] (14 MBps) Copying: 56/1024 [MB] (14 MBps) Copying: 70/1024 [MB] (14 MBps) Copying: 85/1024 [MB] (15 MBps) Copying: 101/1024 [MB] (15 MBps) Copying: 116/1024 [MB] (15 MBps) Copying: 130/1024 [MB] (14 MBps) Copying: 145/1024 [MB] (14 MBps) Copying: 159/1024 [MB] (14 MBps) Copying: 174/1024 [MB] (14 MBps) Copying: 188/1024 [MB] (14 MBps) Copying: 203/1024 [MB] (14 MBps) Copying: 218/1024 [MB] (14 MBps) Copying: 232/1024 [MB] (14 MBps) Copying: 248/1024 [MB] (15 MBps) Copying: 262/1024 [MB] (14 MBps) Copying: 277/1024 [MB] (14 MBps) Copying: 292/1024 [MB] (14 MBps) Copying: 307/1024 [MB] (14 MBps) Copying: 322/1024 [MB] (15 MBps) Copying: 337/1024 [MB] (14 MBps) Copying: 351/1024 [MB] (14 MBps) Copying: 366/1024 [MB] (14 MBps) Copying: 381/1024 [MB] (14 MBps) Copying: 395/1024 [MB] (14 MBps) Copying: 410/1024 [MB] (14 MBps) Copying: 424/1024 [MB] (14 MBps) Copying: 439/1024 [MB] (14 MBps) Copying: 454/1024 [MB] (14 MBps) Copying: 469/1024 [MB] (14 MBps) Copying: 484/1024 [MB] (15 MBps) Copying: 498/1024 [MB] (14 MBps) Copying: 514/1024 [MB] (15 MBps) Copying: 529/1024 [MB] (15 MBps) Copying: 544/1024 [MB] (14 MBps) Copying: 559/1024 [MB] (15 MBps) Copying: 574/1024 [MB] (15 MBps) Copying: 590/1024 [MB] (15 MBps) Copying: 604/1024 [MB] (14 MBps) Copying: 619/1024 [MB] (14 MBps) Copying: 634/1024 [MB] (15 MBps) Copying: 650/1024 [MB] (15 MBps) Copying: 665/1024 [MB] (15 MBps) Copying: 680/1024 [MB] (15 MBps) Copying: 695/1024 [MB] (15 MBps) Copying: 710/1024 [MB] (14 MBps) Copying: 725/1024 [MB] (15 MBps) Copying: 740/1024 [MB] (14 MBps) Copying: 755/1024 [MB] (15 MBps) Copying: 770/1024 [MB] (14 MBps) Copying: 785/1024 [MB] (14 MBps) Copying: 799/1024 [MB] (14 MBps) Copying: 814/1024 [MB] (14 MBps) Copying: 829/1024 [MB] (15 MBps) Copying: 844/1024 [MB] (15 MBps) Copying: 859/1024 [MB] (15 MBps) Copying: 875/1024 [MB] (15 MBps) Copying: 890/1024 [MB] (14 MBps) Copying: 904/1024 [MB] (14 MBps) Copying: 919/1024 [MB] (14 MBps) Copying: 934/1024 [MB] (15 MBps) Copying: 949/1024 [MB] (15 MBps) Copying: 964/1024 [MB] (15 MBps) Copying: 979/1024 [MB] (14 MBps) Copying: 994/1024 [MB] (14 MBps) Copying: 1008/1024 [MB] (14 MBps) Copying: 1024/1024 [MB] (average 14 MBps) 00:23:35.763 00:23:35.763 21:14:49 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:35.763 21:14:49 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:36.022 21:14:49 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:36.282 [2024-07-13 21:14:49.998390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:49.998446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:36.282 [2024-07-13 21:14:49.998489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:36.282 [2024-07-13 21:14:49.998503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:49.998545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:36.282 [2024-07-13 21:14:50.001334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.001365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:36.282 [2024-07-13 21:14:50.001397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.763 ms 00:23:36.282 [2024-07-13 21:14:50.001406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:50.003482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.003527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:36.282 [2024-07-13 21:14:50.003550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:23:36.282 [2024-07-13 21:14:50.003562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:50.019300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.019358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:36.282 [2024-07-13 21:14:50.019394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.702 ms 00:23:36.282 [2024-07-13 21:14:50.019407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:50.026601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.026649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:36.282 [2024-07-13 21:14:50.026682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.144 ms 00:23:36.282 [2024-07-13 21:14:50.026693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:50.052209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.052246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:36.282 [2024-07-13 21:14:50.052281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.405 ms 00:23:36.282 [2024-07-13 21:14:50.052291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:50.067860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.067913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:36.282 [2024-07-13 21:14:50.067956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.522 ms 00:23:36.282 [2024-07-13 21:14:50.067967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:50.068127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.068158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:36.282 [2024-07-13 21:14:50.068189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:23:36.282 [2024-07-13 21:14:50.068199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:50.093162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.093198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info 
metadata 00:23:36.282 [2024-07-13 21:14:50.093230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.924 ms 00:23:36.282 [2024-07-13 21:14:50.093240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:50.117353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.117389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:36.282 [2024-07-13 21:14:50.117421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.068 ms 00:23:36.282 [2024-07-13 21:14:50.117431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:50.142087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.142137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:36.282 [2024-07-13 21:14:50.142170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.610 ms 00:23:36.282 [2024-07-13 21:14:50.142180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.282 [2024-07-13 21:14:50.166533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.282 [2024-07-13 21:14:50.166568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:36.283 [2024-07-13 21:14:50.166601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.242 ms 00:23:36.283 [2024-07-13 21:14:50.166611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.283 [2024-07-13 21:14:50.166657] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:36.283 [2024-07-13 21:14:50.166677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:23:36.283 [2024-07-13 21:14:50.166893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.166994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167784] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:36.283 [2024-07-13 21:14:50.167795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:36.284 [2024-07-13 21:14:50.167957] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:36.284 [2024-07-13 21:14:50.167970] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d123a481-2c73-4fd0-aeba-33bdf8de3021 00:23:36.284 [2024-07-13 21:14:50.167981] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:36.284 [2024-07-13 21:14:50.167992] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:36.284 [2024-07-13 21:14:50.168005] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:36.284 [2024-07-13 21:14:50.168017] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:36.284 [2024-07-13 21:14:50.168028] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:36.284 [2024-07-13 21:14:50.168040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:36.284 [2024-07-13 21:14:50.168049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:36.284 [2024-07-13 21:14:50.168072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:36.284 [2024-07-13 21:14:50.168081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:36.284 [2024-07-13 21:14:50.168096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.284 [2024-07-13 21:14:50.168106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:36.284 [2024-07-13 21:14:50.168119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:23:36.284 [2024-07-13 21:14:50.168129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.284 [2024-07-13 21:14:50.181740] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.284 [2024-07-13 21:14:50.181775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:36.284 [2024-07-13 21:14:50.181808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.566 ms 00:23:36.284 [2024-07-13 21:14:50.181818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.284 [2024-07-13 21:14:50.182096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.284 [2024-07-13 21:14:50.182127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:36.284 [2024-07-13 21:14:50.182142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:23:36.284 [2024-07-13 21:14:50.182153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.228713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.228773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.543 [2024-07-13 21:14:50.228808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.228820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.228942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.228958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.543 [2024-07-13 21:14:50.228971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.228981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.229114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.229140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.543 [2024-07-13 21:14:50.229154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.229165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.229191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.229204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.543 [2024-07-13 21:14:50.229216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.229234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.306307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.306362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.543 [2024-07-13 21:14:50.306397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.306407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.336409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.336443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.543 [2024-07-13 21:14:50.336475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.336486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.336604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.336623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.543 [2024-07-13 21:14:50.336639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.336649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.336709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.336739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.543 [2024-07-13 21:14:50.336768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.336778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.336956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.336977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.543 [2024-07-13 21:14:50.336992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.337005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.337062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.337080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:36.543 [2024-07-13 21:14:50.337095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.337120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.337170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.543 [2024-07-13 21:14:50.337184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.543 [2024-07-13 21:14:50.337197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.543 [2024-07-13 21:14:50.337210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.543 [2024-07-13 21:14:50.337264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.544 [2024-07-13 21:14:50.337280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.544 [2024-07-13 21:14:50.337293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.544 [2024-07-13 21:14:50.337304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.544 [2024-07-13 21:14:50.337482] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.024 ms, result 0 00:23:36.544 true 00:23:36.544 21:14:50 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75687 00:23:36.544 21:14:50 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75687 00:23:36.544 21:14:50 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:36.544 [2024-07-13 21:14:50.460510] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
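The @83-@87 steps traced just above, plus the @88 replay that follows below, are the crux of the test: spdk_tgt (pid 75687 in this run) is killed with SIGKILL so the FTL device never gets a clean shutdown, and the follow-up I/O goes through a fresh spdk_dd process that must bring ftl0 back up from its JSON config. A bash sketch reconstructed from the trace (paths and the pid are specific to this run, not the verbatim dirty_shutdown.sh):

    # Simulate a dirty shutdown: SIGKILL the target, no clean FTL teardown.
    kill -9 75687
    rm -f /dev/shm/spdk_tgt_trace.pid75687

    # Stage random data: 262144 blocks x 4096 B = 1 GiB.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --bs=4096 --count=262144

    # Replay it into the FTL bdev at block offset 262144; spdk_dd restarts
    # ftl0 from ftl.json, which forces the dirty-shutdown recovery path
    # logged below (blobstore recovery, restore of NV cache/valid map/band
    # info/trim metadata).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --ob=ftl0 --count=262144 --seek=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json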
00:23:36.544 [2024-07-13 21:14:50.460695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76634 ] 00:23:36.802 [2024-07-13 21:14:50.630473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.061 [2024-07-13 21:14:50.777227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.749  Copying: 219/1024 [MB] (219 MBps) Copying: 439/1024 [MB] (219 MBps) Copying: 657/1024 [MB] (217 MBps) Copying: 872/1024 [MB] (215 MBps) Copying: 1024/1024 [MB] (average 216 MBps) 00:23:42.749 00:23:42.749 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75687 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:42.749 21:14:56 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:43.007 [2024-07-13 21:14:56.703306] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:43.007 [2024-07-13 21:14:56.703474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76698 ] 00:23:43.007 [2024-07-13 21:14:56.871438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.265 [2024-07-13 21:14:57.028778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.523 [2024-07-13 21:14:57.277132] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:43.523 [2024-07-13 21:14:57.277214] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:43.523 [2024-07-13 21:14:57.338775] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:43.523 [2024-07-13 21:14:57.339365] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:43.523 [2024-07-13 21:14:57.339630] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:43.783 [2024-07-13 21:14:57.604594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.783 [2024-07-13 21:14:57.604658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:43.783 [2024-07-13 21:14:57.604692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:43.783 [2024-07-13 21:14:57.604702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.783 [2024-07-13 21:14:57.604763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.783 [2024-07-13 21:14:57.604780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:43.783 [2024-07-13 21:14:57.604792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:43.783 [2024-07-13 21:14:57.604801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.783 [2024-07-13 21:14:57.604832] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:43.783 [2024-07-13 21:14:57.605741] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:43.783 [2024-07-13 21:14:57.605801] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.783 [2024-07-13 21:14:57.605831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:43.783 [2024-07-13 21:14:57.605856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:23:43.783 [2024-07-13 21:14:57.605907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.783 [2024-07-13 21:14:57.607072] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:43.783 [2024-07-13 21:14:57.619925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.783 [2024-07-13 21:14:57.619961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:43.783 [2024-07-13 21:14:57.619993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.855 ms 00:23:43.783 [2024-07-13 21:14:57.620003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.783 [2024-07-13 21:14:57.620061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.783 [2024-07-13 21:14:57.620078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:43.783 [2024-07-13 21:14:57.620089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:43.783 [2024-07-13 21:14:57.620101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.783 [2024-07-13 21:14:57.624278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.783 [2024-07-13 21:14:57.624327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:43.783 [2024-07-13 21:14:57.624357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.116 ms 00:23:43.783 [2024-07-13 21:14:57.624366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.783 [2024-07-13 21:14:57.624458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.783 [2024-07-13 21:14:57.624476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:43.783 [2024-07-13 21:14:57.624491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:43.783 [2024-07-13 21:14:57.624500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.783 [2024-07-13 21:14:57.624658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.783 [2024-07-13 21:14:57.624674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:43.783 [2024-07-13 21:14:57.624686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:43.783 [2024-07-13 21:14:57.624697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.783 [2024-07-13 21:14:57.624735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:43.783 [2024-07-13 21:14:57.628252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.783 [2024-07-13 21:14:57.628298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:43.783 [2024-07-13 21:14:57.628327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.531 ms 00:23:43.783 [2024-07-13 21:14:57.628336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.783 [2024-07-13 21:14:57.628372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.783 [2024-07-13 21:14:57.628387] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:43.783 [2024-07-13 21:14:57.628401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:43.783 [2024-07-13 21:14:57.628410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.783 [2024-07-13 21:14:57.628435] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:43.783 [2024-07-13 21:14:57.628458] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:43.783 [2024-07-13 21:14:57.628494] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:43.784 [2024-07-13 21:14:57.628511] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:43.784 [2024-07-13 21:14:57.628639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:43.784 [2024-07-13 21:14:57.628660] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:43.784 [2024-07-13 21:14:57.628673] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:43.784 [2024-07-13 21:14:57.628686] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:43.784 [2024-07-13 21:14:57.628699] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:43.784 [2024-07-13 21:14:57.628710] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:43.784 [2024-07-13 21:14:57.628720] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:43.784 [2024-07-13 21:14:57.628729] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:43.784 [2024-07-13 21:14:57.628739] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:43.784 [2024-07-13 21:14:57.628749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.784 [2024-07-13 21:14:57.628760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:43.784 [2024-07-13 21:14:57.628774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:23:43.784 [2024-07-13 21:14:57.628784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.784 [2024-07-13 21:14:57.628882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.784 [2024-07-13 21:14:57.628900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:43.784 [2024-07-13 21:14:57.628912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:43.784 [2024-07-13 21:14:57.628922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.784 [2024-07-13 21:14:57.629017] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:43.784 [2024-07-13 21:14:57.629039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:43.784 [2024-07-13 21:14:57.629050] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:43.784 [2024-07-13 21:14:57.629061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:43.784 
[2024-07-13 21:14:57.629102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:43.784 [2024-07-13 21:14:57.629121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:43.784 [2024-07-13 21:14:57.629131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:43.784 [2024-07-13 21:14:57.629148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:43.784 [2024-07-13 21:14:57.629158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:43.784 [2024-07-13 21:14:57.629166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:43.784 [2024-07-13 21:14:57.629175] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:43.784 [2024-07-13 21:14:57.629186] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:43.784 [2024-07-13 21:14:57.629195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:43.784 [2024-07-13 21:14:57.629225] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:43.784 [2024-07-13 21:14:57.629235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629244] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:43.784 [2024-07-13 21:14:57.629253] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:43.784 [2024-07-13 21:14:57.629262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:43.784 [2024-07-13 21:14:57.629272] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:43.784 [2024-07-13 21:14:57.629281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:43.784 [2024-07-13 21:14:57.629299] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:43.784 [2024-07-13 21:14:57.629307] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:43.784 [2024-07-13 21:14:57.629325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:43.784 [2024-07-13 21:14:57.629334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:43.784 [2024-07-13 21:14:57.629351] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:43.784 [2024-07-13 21:14:57.629360] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:43.784 [2024-07-13 21:14:57.629378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:43.784 [2024-07-13 21:14:57.629387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.25 MiB 00:23:43.784 [2024-07-13 21:14:57.629404] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:43.784 [2024-07-13 21:14:57.629414] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:43.784 [2024-07-13 21:14:57.629422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:43.784 [2024-07-13 21:14:57.629431] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:43.784 [2024-07-13 21:14:57.629440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:43.784 [2024-07-13 21:14:57.629450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:43.784 [2024-07-13 21:14:57.629460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.784 [2024-07-13 21:14:57.629470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:43.784 [2024-07-13 21:14:57.629479] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:43.784 [2024-07-13 21:14:57.629489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:43.784 [2024-07-13 21:14:57.629499] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:43.784 [2024-07-13 21:14:57.629508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:43.784 [2024-07-13 21:14:57.629517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:43.784 [2024-07-13 21:14:57.629527] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:43.784 [2024-07-13 21:14:57.629539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:43.784 [2024-07-13 21:14:57.629550] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:43.784 [2024-07-13 21:14:57.629561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:43.784 [2024-07-13 21:14:57.629570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:43.784 [2024-07-13 21:14:57.629580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:43.784 [2024-07-13 21:14:57.629590] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:43.784 [2024-07-13 21:14:57.629599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:43.784 [2024-07-13 21:14:57.629608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:43.784 [2024-07-13 21:14:57.629618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:43.784 [2024-07-13 21:14:57.629628] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:43.784 [2024-07-13 21:14:57.629637] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 
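The hex superblock dump (which continues just below with the remaining regions and the base device) and the MiB layout dump above describe the same regions in different units: blk_offs and blk_sz count FTL blocks, and assuming the 4096-byte block size used by every dd command in this run, MiB = blocks * 4096 / 2^20. A quick cross-check in bash, with blk_sz/blk_offs values copied from the dump:

    echo $(( 0x5000   * 4096 / 1048576 ))   # l2p (type:0x2)      -> 80 MiB
    echo $(( 0x400    * 4096 / 1048576 ))   # p2l0..3 (0xa-0xd)   -> 4 MiB each
    echo $(( 0x100000 * 4096 / 1048576 ))   # data_nvc (type:0x8) -> 4096 MiB
    awk 'BEGIN { printf "%.2f\n", 25056 * 4096 / 2^20 }'   # 0x61e0 blocks -> 97.88 MiB

matching the "blocks: 80.00 MiB", "4.00 MiB", "4096.00 MiB" and "offset: 97.88 MiB" figures printed by dump_region above.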
00:23:43.784 [2024-07-13 21:14:57.629647] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:43.784 [2024-07-13 21:14:57.629657] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:43.784 [2024-07-13 21:14:57.629667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:43.784 [2024-07-13 21:14:57.629676] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:43.784 [2024-07-13 21:14:57.629688] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:43.784 [2024-07-13 21:14:57.629698] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:43.784 [2024-07-13 21:14:57.629708] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:43.784 [2024-07-13 21:14:57.629719] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:43.784 [2024-07-13 21:14:57.629728] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:43.784 [2024-07-13 21:14:57.629739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.784 [2024-07-13 21:14:57.629750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:43.784 [2024-07-13 21:14:57.629765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:23:43.784 [2024-07-13 21:14:57.629774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.784 [2024-07-13 21:14:57.644427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.784 [2024-07-13 21:14:57.644481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:43.784 [2024-07-13 21:14:57.644517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.606 ms 00:23:43.784 [2024-07-13 21:14:57.644527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.784 [2024-07-13 21:14:57.644643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.784 [2024-07-13 21:14:57.644658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:43.784 [2024-07-13 21:14:57.644669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:43.784 [2024-07-13 21:14:57.644680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.784 [2024-07-13 21:14:57.689434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.784 [2024-07-13 21:14:57.689475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:43.784 [2024-07-13 21:14:57.689506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.662 ms 00:23:43.784 [2024-07-13 21:14:57.689516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.785 [2024-07-13 21:14:57.689564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.785 [2024-07-13 21:14:57.689579] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:43.785 [2024-07-13 21:14:57.689590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:43.785 [2024-07-13 21:14:57.689599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.785 [2024-07-13 21:14:57.690011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.785 [2024-07-13 21:14:57.690043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:43.785 [2024-07-13 21:14:57.690056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:23:43.785 [2024-07-13 21:14:57.690066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.785 [2024-07-13 21:14:57.690194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.785 [2024-07-13 21:14:57.690221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:43.785 [2024-07-13 21:14:57.690233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:23:43.785 [2024-07-13 21:14:57.690242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.785 [2024-07-13 21:14:57.704082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.785 [2024-07-13 21:14:57.704117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:43.785 [2024-07-13 21:14:57.704146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.812 ms 00:23:43.785 [2024-07-13 21:14:57.704156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.044 [2024-07-13 21:14:57.717967] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:44.044 [2024-07-13 21:14:57.718020] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:44.044 [2024-07-13 21:14:57.718055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.044 [2024-07-13 21:14:57.718065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:44.044 [2024-07-13 21:14:57.718076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.788 ms 00:23:44.044 [2024-07-13 21:14:57.718086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.044 [2024-07-13 21:14:57.741316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.044 [2024-07-13 21:14:57.741366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:44.044 [2024-07-13 21:14:57.741397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.188 ms 00:23:44.044 [2024-07-13 21:14:57.741407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.044 [2024-07-13 21:14:57.753881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.044 [2024-07-13 21:14:57.753932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:44.044 [2024-07-13 21:14:57.753961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.425 ms 00:23:44.044 [2024-07-13 21:14:57.753970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.044 [2024-07-13 21:14:57.766074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.044 [2024-07-13 21:14:57.766125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:23:44.044 [2024-07-13 21:14:57.766166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.066 ms 00:23:44.044 [2024-07-13 21:14:57.766176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.044 [2024-07-13 21:14:57.766627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.044 [2024-07-13 21:14:57.766657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:44.044 [2024-07-13 21:14:57.766671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:23:44.044 [2024-07-13 21:14:57.766682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.044 [2024-07-13 21:14:57.824926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.044 [2024-07-13 21:14:57.824981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:44.044 [2024-07-13 21:14:57.825014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.222 ms 00:23:44.044 [2024-07-13 21:14:57.825024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.044 [2024-07-13 21:14:57.834801] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:44.044 [2024-07-13 21:14:57.836737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.044 [2024-07-13 21:14:57.836786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:44.044 [2024-07-13 21:14:57.836816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.648 ms 00:23:44.044 [2024-07-13 21:14:57.836826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.044 [2024-07-13 21:14:57.836926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.044 [2024-07-13 21:14:57.836945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:44.044 [2024-07-13 21:14:57.836957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:44.044 [2024-07-13 21:14:57.836966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.044 [2024-07-13 21:14:57.837072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.044 [2024-07-13 21:14:57.837099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:44.044 [2024-07-13 21:14:57.837115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:44.044 [2024-07-13 21:14:57.837125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.044 [2024-07-13 21:14:57.838729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.044 [2024-07-13 21:14:57.838780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:44.044 [2024-07-13 21:14:57.838808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:23:44.044 [2024-07-13 21:14:57.838818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.045 [2024-07-13 21:14:57.838883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.045 [2024-07-13 21:14:57.838900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:44.045 [2024-07-13 21:14:57.838911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:44.045 [2024-07-13 21:14:57.838921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.045 [2024-07-13 
21:14:57.838964] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:44.045 [2024-07-13 21:14:57.838979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.045 [2024-07-13 21:14:57.838989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:44.045 [2024-07-13 21:14:57.838999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:44.045 [2024-07-13 21:14:57.839009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.045 [2024-07-13 21:14:57.863381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.045 [2024-07-13 21:14:57.863434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:44.045 [2024-07-13 21:14:57.863471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.331 ms 00:23:44.045 [2024-07-13 21:14:57.863481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.045 [2024-07-13 21:14:57.863553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.045 [2024-07-13 21:14:57.863570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:44.045 [2024-07-13 21:14:57.863581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:44.045 [2024-07-13 21:14:57.863590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.045 [2024-07-13 21:14:57.865031] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.858 ms, result 0 00:24:29.931  Copying: 23/1024 [MB] (23 MBps) Copying: 46/1024 [MB] (23 MBps) Copying: 70/1024 [MB] (23 MBps) Copying: 92/1024 [MB] (22 MBps) Copying: 115/1024 [MB] (22 MBps) Copying: 138/1024 [MB] (23 MBps) Copying: 161/1024 [MB] (22 MBps) Copying: 184/1024 [MB] (23 MBps) Copying: 207/1024 [MB] (22 MBps) Copying: 229/1024 [MB] (22 MBps) Copying: 252/1024 [MB] (22 MBps) Copying: 274/1024 [MB] (22 MBps) Copying: 297/1024 [MB] (22 MBps) Copying: 320/1024 [MB] (23 MBps) Copying: 343/1024 [MB] (22 MBps) Copying: 366/1024 [MB] (22 MBps) Copying: 388/1024 [MB] (22 MBps) Copying: 410/1024 [MB] (22 MBps) Copying: 433/1024 [MB] (22 MBps) Copying: 456/1024 [MB] (22 MBps) Copying: 478/1024 [MB] (22 MBps) Copying: 500/1024 [MB] (22 MBps) Copying: 523/1024 [MB] (22 MBps) Copying: 546/1024 [MB] (23 MBps) Copying: 569/1024 [MB] (22 MBps) Copying: 592/1024 [MB] (22 MBps) Copying: 615/1024 [MB] (23 MBps) Copying: 638/1024 [MB] (22 MBps) Copying: 660/1024 [MB] (22 MBps) Copying: 683/1024 [MB] (22 MBps) Copying: 705/1024 [MB] (22 MBps) Copying: 727/1024 [MB] (22 MBps) Copying: 749/1024 [MB] (22 MBps) Copying: 772/1024 [MB] (22 MBps) Copying: 794/1024 [MB] (22 MBps) Copying: 817/1024 [MB] (22 MBps) Copying: 840/1024 [MB] (22 MBps) Copying: 862/1024 [MB] (22 MBps) Copying: 885/1024 [MB] (22 MBps) Copying: 907/1024 [MB] (22 MBps) Copying: 930/1024 [MB] (22 MBps) Copying: 953/1024 [MB] (23 MBps) Copying: 975/1024 [MB] (22 MBps) Copying: 997/1024 [MB] (22 MBps) Copying: 1020/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-13 21:15:43.815025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.931 [2024-07-13 21:15:43.815139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:29.931 [2024-07-13 21:15:43.815161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:29.931 [2024-07-13 21:15:43.815174] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.931 [2024-07-13 21:15:43.817180] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:29.931 [2024-07-13 21:15:43.822026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.931 [2024-07-13 21:15:43.822081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:29.931 [2024-07-13 21:15:43.822096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.799 ms 00:24:29.931 [2024-07-13 21:15:43.822115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.931 [2024-07-13 21:15:43.834360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.931 [2024-07-13 21:15:43.834397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:29.931 [2024-07-13 21:15:43.834427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.957 ms 00:24:29.931 [2024-07-13 21:15:43.834437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.931 [2024-07-13 21:15:43.855030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.931 [2024-07-13 21:15:43.855074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:29.931 [2024-07-13 21:15:43.855092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.573 ms 00:24:29.931 [2024-07-13 21:15:43.855119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.190 [2024-07-13 21:15:43.861813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.190 [2024-07-13 21:15:43.861878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:30.190 [2024-07-13 21:15:43.861910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.634 ms 00:24:30.190 [2024-07-13 21:15:43.861921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.190 [2024-07-13 21:15:43.888966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.190 [2024-07-13 21:15:43.889034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:30.190 [2024-07-13 21:15:43.889048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.980 ms 00:24:30.190 [2024-07-13 21:15:43.889058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.190 [2024-07-13 21:15:43.905796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.190 [2024-07-13 21:15:43.905831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:30.190 [2024-07-13 21:15:43.905877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.699 ms 00:24:30.190 [2024-07-13 21:15:43.905887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.190 [2024-07-13 21:15:43.984883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.190 [2024-07-13 21:15:43.984986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:30.190 [2024-07-13 21:15:43.985032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.955 ms 00:24:30.190 [2024-07-13 21:15:43.985065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.190 [2024-07-13 21:15:44.010258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.190 [2024-07-13 21:15:44.010293] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:30.190 [2024-07-13 21:15:44.010322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.173 ms 00:24:30.190 [2024-07-13 21:15:44.010331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.190 [2024-07-13 21:15:44.034907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.190 [2024-07-13 21:15:44.034941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:30.190 [2024-07-13 21:15:44.034970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.540 ms 00:24:30.190 [2024-07-13 21:15:44.034993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.190 [2024-07-13 21:15:44.058922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.190 [2024-07-13 21:15:44.058957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:30.190 [2024-07-13 21:15:44.058986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.888 ms 00:24:30.190 [2024-07-13 21:15:44.058995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.190 [2024-07-13 21:15:44.082676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.190 [2024-07-13 21:15:44.082711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:30.190 [2024-07-13 21:15:44.082740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.608 ms 00:24:30.190 [2024-07-13 21:15:44.082750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.190 [2024-07-13 21:15:44.082786] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:30.190 [2024-07-13 21:15:44.082806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 83968 / 261120 wr_cnt: 1 state: open 00:24:30.190 [2024-07-13 21:15:44.082817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:30.190 [2024-07-13 21:15:44.082827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:30.190 [2024-07-13 21:15:44.082853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:30.190 [2024-07-13 21:15:44.082883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:30.190 [2024-07-13 21:15:44.082893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:30.190 [2024-07-13 21:15:44.082903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.082912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.082922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.082932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.082941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.082950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.082960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.082969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.082979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.082988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.082997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083281] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 
21:15:44.083536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:24:30.191 [2024-07-13 21:15:44.083784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:30.191 [2024-07-13 21:15:44.083941] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:30.191 [2024-07-13 21:15:44.083951] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d123a481-2c73-4fd0-aeba-33bdf8de3021 00:24:30.191 [2024-07-13 21:15:44.083961] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 83968 00:24:30.191 [2024-07-13 21:15:44.083977] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 84928 00:24:30.192 [2024-07-13 21:15:44.083995] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 83968 00:24:30.192 [2024-07-13 21:15:44.084008] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0114 00:24:30.192 [2024-07-13 21:15:44.084017] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:30.192 [2024-07-13 21:15:44.084028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:30.192 [2024-07-13 21:15:44.084037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:30.192 [2024-07-13 21:15:44.084045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:30.192 [2024-07-13 21:15:44.084065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:30.192 [2024-07-13 21:15:44.084076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.192 [2024-07-13 21:15:44.084087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:30.192 [2024-07-13 21:15:44.084098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:24:30.192 [2024-07-13 21:15:44.084107] mngt/ftl_mngt.c: 410:trace_step: 
00:24:30.192 [2024-07-13 21:15:44.098051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.192 [2024-07-13 21:15:44.098083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:30.192 [2024-07-13 21:15:44.098113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.909 ms
00:24:30.192 [2024-07-13 21:15:44.098122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.192 [2024-07-13 21:15:44.098382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.192 [2024-07-13 21:15:44.098414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:30.192 [2024-07-13 21:15:44.098426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms
00:24:30.192 [2024-07-13 21:15:44.098437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.135728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.135765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:30.451 [2024-07-13 21:15:44.135796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.135806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.135868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.135884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:30.451 [2024-07-13 21:15:44.135894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.135903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.135984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.136001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:30.451 [2024-07-13 21:15:44.136011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.136036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.136087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.136100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:30.451 [2024-07-13 21:15:44.136109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.136119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.211847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.211895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:30.451 [2024-07-13 21:15:44.211926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.211935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.242417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.242451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:30.451 [2024-07-13 21:15:44.242480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.242491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.242564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.242579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:30.451 [2024-07-13 21:15:44.242589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.242599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.242643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.242657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:30.451 [2024-07-13 21:15:44.242667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.242676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.242832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.242855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:30.451 [2024-07-13 21:15:44.242866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.242876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.242939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.242958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:30.451 [2024-07-13 21:15:44.242969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.242978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.243028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.243048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:30.451 [2024-07-13 21:15:44.243058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.243068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.243116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:30.451 [2024-07-13 21:15:44.243131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:30.451 [2024-07-13 21:15:44.243141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:30.451 [2024-07-13 21:15:44.243152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:30.451 [2024-07-13 21:15:44.243279] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 429.114 ms, result 0
00:24:31.830
00:24:31.830
00:24:31.830 21:15:45 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:24:33.786 21:15:47 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
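The --count argument above is in FTL blocks; assuming the FTL device's 4 KiB block size (the block_size constant below is an assumption, not read from this log), the requested transfer works out to exactly the 1024 MiB the progress line reports later in the run:

    count = 262144              # --count from the spdk_dd command line above
    block_size = 4096           # assumed FTL block size in bytes
    total_bytes = count * block_size
    print(total_bytes)          # -> 1073741824, exactly 1 GiB
    print(total_bytes // 2**20) # -> 1024 MiB, matching "Copying: 1024/1024 [MB]"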
00:24:33.786 [2024-07-13 21:15:47.352450] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:24:33.786 [2024-07-13 21:15:47.352639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77209 ]
00:24:33.786 [2024-07-13 21:15:47.522332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:34.079 [2024-07-13 21:15:47.710559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:34.079 [2024-07-13 21:15:47.963514] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:34.079 [2024-07-13 21:15:47.963592] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:34.341 [2024-07-13 21:15:48.111887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.111930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:24:34.341 [2024-07-13 21:15:48.111970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:24:34.341 [2024-07-13 21:15:48.111980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.112040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.112074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:34.341 [2024-07-13 21:15:48.112085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:24:34.341 [2024-07-13 21:15:48.112095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.112121] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:34.341 [2024-07-13 21:15:48.113124] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:34.341 [2024-07-13 21:15:48.113176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.113190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:34.341 [2024-07-13 21:15:48.113201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms
00:24:34.341 [2024-07-13 21:15:48.113211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.114287] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:34.341 [2024-07-13 21:15:48.126999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.127053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:24:34.341 [2024-07-13 21:15:48.127090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.714 ms
00:24:34.341 [2024-07-13 21:15:48.127100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.127160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.127177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:24:34.341 [2024-07-13 21:15:48.127188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:24:34.341 [2024-07-13 21:15:48.127198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.131469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.131505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:34.341 [2024-07-13 21:15:48.131534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms
00:24:34.341 [2024-07-13 21:15:48.131544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.131633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.131650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:34.341 [2024-07-13 21:15:48.131661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:24:34.341 [2024-07-13 21:15:48.131670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.131746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.131781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:24:34.341 [2024-07-13 21:15:48.131793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:24:34.341 [2024-07-13 21:15:48.131803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.131836] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:34.341 [2024-07-13 21:15:48.135412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.135461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:34.341 [2024-07-13 21:15:48.135494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.588 ms
00:24:34.341 [2024-07-13 21:15:48.135504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.135562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.135577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:24:34.341 [2024-07-13 21:15:48.135588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:24:34.341 [2024-07-13 21:15:48.135598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.135621] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:34.341 [2024-07-13 21:15:48.135647] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:24:34.341 [2024-07-13 21:15:48.135681] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:24:34.341 [2024-07-13 21:15:48.135714] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:24:34.341 [2024-07-13 21:15:48.135783] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:24:34.341 [2024-07-13 21:15:48.135797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:34.341 [2024-07-13 21:15:48.135810] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:24:34.341 [2024-07-13 21:15:48.135822] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:24:34.341 [2024-07-13 21:15:48.135834] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:24:34.341 [2024-07-13 21:15:48.135849] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:24:34.341 [2024-07-13 21:15:48.135859] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:24:34.341 [2024-07-13 21:15:48.135868] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:24:34.341 [2024-07-13 21:15:48.135877] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:24:34.341 [2024-07-13 21:15:48.135903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.135913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:24:34.341 [2024-07-13 21:15:48.135924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms
00:24:34.341 [2024-07-13 21:15:48.135934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.136011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.341 [2024-07-13 21:15:48.136026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:24:34.341 [2024-07-13 21:15:48.136040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:24:34.341 [2024-07-13 21:15:48.136050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.341 [2024-07-13 21:15:48.136124] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:34.341 [2024-07-13 21:15:48.136139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:34.341 [2024-07-13 21:15:48.136150] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:24:34.341 [2024-07-13 21:15:48.136160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:34.341 [2024-07-13 21:15:48.136170] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:34.341 [2024-07-13 21:15:48.136179] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:24:34.341 [2024-07-13 21:15:48.136188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:24:34.341 [2024-07-13 21:15:48.136198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:34.341 [2024-07-13 21:15:48.136206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:24:34.341 [2024-07-13 21:15:48.136215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:24:34.341 [2024-07-13 21:15:48.136224] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:34.342 [2024-07-13 21:15:48.136235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:24:34.342 [2024-07-13 21:15:48.136244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:24:34.342 [2024-07-13 21:15:48.136253] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:34.342 [2024-07-13 21:15:48.136262] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB
00:24:34.342 [2024-07-13 21:15:48.136271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:34.342 [2024-07-13 21:15:48.136279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:34.342 [2024-07-13 21:15:48.136289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB
00:24:34.342 [2024-07-13 21:15:48.136298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:34.342 [2024-07-13 21:15:48.136307] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:24:34.342 [2024-07-13 21:15:48.136316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB
00:24:34.342 [2024-07-13 21:15:48.136337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:24:34.342 [2024-07-13 21:15:48.136346] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:34.342 [2024-07-13 21:15:48.136355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:24:34.342 [2024-07-13 21:15:48.136364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:24:34.342 [2024-07-13 21:15:48.136373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:34.342 [2024-07-13 21:15:48.136382] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB
00:24:34.342 [2024-07-13 21:15:48.136391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:24:34.342 [2024-07-13 21:15:48.136399] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:34.342 [2024-07-13 21:15:48.136408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:24:34.342 [2024-07-13 21:15:48.136417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:24:34.342 [2024-07-13 21:15:48.136425] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:34.342 [2024-07-13 21:15:48.136434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB
00:24:34.342 [2024-07-13 21:15:48.136443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:24:34.342 [2024-07-13 21:15:48.136452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:34.342 [2024-07-13 21:15:48.136460] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:24:34.342 [2024-07-13 21:15:48.136469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:24:34.342 [2024-07-13 21:15:48.136478] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:34.342 [2024-07-13 21:15:48.136487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB
00:24:34.342 [2024-07-13 21:15:48.136496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
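The 80.00 MiB l2p region in the table above is exactly the L2P table sized from the two parameters reported a few lines earlier, entries times address size; a quick check (the 4 KiB block size in the comment is an assumption, consistent with the rest of this layout):

    l2p_entries = 20971520      # "L2P entries"
    l2p_addr_size = 4           # "L2P address size" (bytes per entry)
    print(l2p_entries * l2p_addr_size / 2**20)  # -> 80.0 MiB, "Region l2p ... blocks: 80.00 MiB"
    # At one entry per 4 KiB block, the table would address 20971520 * 4 KiB = 80 GiB of user space.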
00:24:34.342 [2024-07-13 21:15:48.136504] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:34.342 [2024-07-13 21:15:48.136514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:34.342 [2024-07-13 21:15:48.136523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:24:34.342 [2024-07-13 21:15:48.136538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:34.342 [2024-07-13 21:15:48.136548] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:34.342 [2024-07-13 21:15:48.136558] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:24:34.342 [2024-07-13 21:15:48.136567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:24:34.342 [2024-07-13 21:15:48.136602] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:34.342 [2024-07-13 21:15:48.136628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:24:34.342 [2024-07-13 21:15:48.136639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:24:34.342 [2024-07-13 21:15:48.136649] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:34.342 [2024-07-13 21:15:48.136663] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:34.342 [2024-07-13 21:15:48.136674] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:24:34.342 [2024-07-13 21:15:48.136685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:24:34.342 [2024-07-13 21:15:48.136695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:24:34.342 [2024-07-13 21:15:48.136720] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:24:34.342 [2024-07-13 21:15:48.136730] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:24:34.342 [2024-07-13 21:15:48.136756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:24:34.342 [2024-07-13 21:15:48.136766] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:24:34.342 [2024-07-13 21:15:48.136777] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:24:34.342 [2024-07-13 21:15:48.136787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:24:34.342 [2024-07-13 21:15:48.136797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:24:34.342 [2024-07-13 21:15:48.136807] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:24:34.342 [2024-07-13 21:15:48.136818] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:24:34.342 [2024-07-13 21:15:48.136829] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:24:34.342 [2024-07-13 21:15:48.136839] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:34.342 [2024-07-13 21:15:48.136849] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:34.342 [2024-07-13 21:15:48.136861] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:34.342 [2024-07-13 21:15:48.136871] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:34.342 [2024-07-13 21:15:48.136882] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:34.342 [2024-07-13 21:15:48.136907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
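The nvc metadata regions dumped above tile the cache device with no holes, and the final offset plus size equals the 5171.00 MiB NV cache capacity reported earlier. A sketch of both checks (again assuming the layout's 4 KiB block size):

    # (blk_offs, blk_sz) pairs from the "SB metadata layout - nvc" dump above
    nvc = [(0x0, 0x20), (0x20, 0x5000), (0x5020, 0x80), (0x50a0, 0x80),
           (0x5120, 0x400), (0x5520, 0x400), (0x5920, 0x400), (0x5d20, 0x400),
           (0x6120, 0x40), (0x6160, 0x40), (0x61a0, 0x20), (0x61c0, 0x20),
           (0x61e0, 0x100000), (0x1061e0, 0x3d120)]
    for (offs, sz), (nxt, _) in zip(nvc, nvc[1:]):
        assert offs + sz == nxt            # each region starts where the previous ends
    end_blocks = nvc[-1][0] + nvc[-1][1]   # 0x143300 blocks
    print(end_blocks * 4096 / 2**20)       # -> 5171.0 MiB, the NV cache device capacity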
00:24:34.342 [2024-07-13 21:15:48.136933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.136944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:24:34.342 [2024-07-13 21:15:48.136954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms
00:24:34.342 [2024-07-13 21:15:48.136965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.151805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.151853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:34.342 [2024-07-13 21:15:48.151886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.762 ms
00:24:34.342 [2024-07-13 21:15:48.151896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.151977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.151996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:24:34.342 [2024-07-13 21:15:48.152006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms
00:24:34.342 [2024-07-13 21:15:48.152016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.187243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.187284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:34.342 [2024-07-13 21:15:48.187317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.137 ms
00:24:34.342 [2024-07-13 21:15:48.187331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.187379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.187394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:34.342 [2024-07-13 21:15:48.187405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:24:34.342 [2024-07-13 21:15:48.187415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.187781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.187809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:34.342 [2024-07-13 21:15:48.187822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms
00:24:34.342 [2024-07-13 21:15:48.187832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.187991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.188027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:34.342 [2024-07-13 21:15:48.188040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms
00:24:34.342 [2024-07-13 21:15:48.188050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.201860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.201895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:34.342 [2024-07-13 21:15:48.201927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.784 ms
00:24:34.342 [2024-07-13 21:15:48.201937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.214696] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:24:34.342 [2024-07-13 21:15:48.214747] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:24:34.342 [2024-07-13 21:15:48.214779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.214790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:24:34.342 [2024-07-13 21:15:48.214801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.723 ms
00:24:34.342 [2024-07-13 21:15:48.214810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.237977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.238012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:24:34.342 [2024-07-13 21:15:48.238042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.108 ms
00:24:34.342 [2024-07-13 21:15:48.238052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.250338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.250373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:24:34.342 [2024-07-13 21:15:48.250403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.244 ms
00:24:34.342 [2024-07-13 21:15:48.250413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.342 [2024-07-13 21:15:48.263166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.342 [2024-07-13 21:15:48.263202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:24:34.343 [2024-07-13 21:15:48.263216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.715 ms
00:24:34.343 [2024-07-13 21:15:48.263226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.343 [2024-07-13 21:15:48.263717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.343 [2024-07-13 21:15:48.263748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:24:34.343 [2024-07-13 21:15:48.263762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms
00:24:34.343 [2024-07-13 21:15:48.263772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.601 [2024-07-13 21:15:48.329762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.601 [2024-07-13 21:15:48.329822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:24:34.601 [2024-07-13 21:15:48.329870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.965 ms
00:24:34.601 [2024-07-13 21:15:48.329884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.601 [2024-07-13 21:15:48.339585] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:34.601 [2024-07-13 21:15:48.341628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.601 [2024-07-13 21:15:48.341657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:24:34.601 [2024-07-13 21:15:48.341686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.678 ms
00:24:34.601 [2024-07-13 21:15:48.341696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.601 [2024-07-13 21:15:48.341779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.601 [2024-07-13 21:15:48.341806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:24:34.601 [2024-07-13 21:15:48.341817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:24:34.601 [2024-07-13 21:15:48.341827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.601 [2024-07-13 21:15:48.342859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.601 [2024-07-13 21:15:48.342911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:24:34.601 [2024-07-13 21:15:48.342924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms
00:24:34.601 [2024-07-13 21:15:48.342934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.601 [2024-07-13 21:15:48.344731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.601 [2024-07-13 21:15:48.344781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:24:34.601 [2024-07-13 21:15:48.344799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.729 ms
00:24:34.601 [2024-07-13 21:15:48.344809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.601 [2024-07-13 21:15:48.344874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.601 [2024-07-13 21:15:48.344890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:24:34.601 [2024-07-13 21:15:48.344930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms
00:24:34.601 [2024-07-13 21:15:48.344945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.601 [2024-07-13 21:15:48.344983] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:34.601 [2024-07-13 21:15:48.344998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.601 [2024-07-13 21:15:48.345007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:24:34.601 [2024-07-13 21:15:48.345018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:24:34.601 [2024-07-13 21:15:48.345030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.601 [2024-07-13 21:15:48.369649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.601 [2024-07-13 21:15:48.369685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:24:34.601 [2024-07-13 21:15:48.369716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.562 ms
00:24:34.602 [2024-07-13 21:15:48.369726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.602 [2024-07-13 21:15:48.369794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:34.602 [2024-07-13 21:15:48.369817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:34.602 [2024-07-13 21:15:48.369828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:24:34.602 [2024-07-13 21:15:48.369848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:34.602 [2024-07-13 21:15:48.374250] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.932 ms, result 0
00:25:13.470  Copying: 1184/1048576 [kB] (1184 kBps) Copying: 3412/1048576 [kB] (2228 kBps) Copying: 16/1024 [MB] (13 MBps) Copying: 45/1024 [MB] (28 MBps) Copying: 73/1024 [MB] (28 MBps) Copying: 102/1024 [MB] (28 MBps) Copying: 130/1024 [MB] (28 MBps) Copying: 159/1024 [MB] (28 MBps) Copying: 187/1024 [MB] (28 MBps) Copying: 215/1024 [MB] (27 MBps) Copying: 243/1024 [MB] (27 MBps) Copying: 271/1024 [MB] (28 MBps) Copying: 300/1024 [MB] (28 MBps) Copying: 328/1024 [MB] (27 MBps) Copying: 355/1024 [MB] (27 MBps) Copying: 383/1024 [MB] (27 MBps) Copying: 411/1024 [MB] (28 MBps) Copying: 440/1024 [MB] (28 MBps) Copying: 469/1024 [MB] (28 MBps) Copying: 498/1024 [MB] (29 MBps) Copying: 527/1024 [MB] (28 MBps) Copying: 556/1024 [MB] (29 MBps) Copying: 585/1024 [MB] (28 MBps) Copying: 614/1024 [MB] (28 MBps) Copying: 643/1024 [MB] (28 MBps) Copying: 671/1024 [MB] (28 MBps) Copying: 700/1024 [MB] (28 MBps) Copying: 728/1024 [MB] (28 MBps) Copying: 758/1024 [MB] (29 MBps) Copying: 788/1024 [MB] (30 MBps) Copying: 818/1024 [MB] (29 MBps) Copying: 847/1024 [MB] (29 MBps) Copying: 876/1024 [MB] (28 MBps) Copying: 905/1024 [MB] (29 MBps) Copying: 934/1024 [MB] (29 MBps) Copying: 963/1024 [MB] (28 MBps) Copying: 992/1024 [MB] (28 MBps) Copying: 1021/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 26 MBps)
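The reported average is consistent with the wall clock the app timestamps give: the copy ran from roughly the 'FTL startup' finish at 21:15:48.374 to the first shutdown step at 21:16:27.284. A rough check (the exact window spdk_dd averages over is an assumption and may differ slightly):

    size_mib = 1024                       # total copied, from the progress line above
    t_start = 48.374                      # seconds past 21:15, "FTL startup" finished
    t_end = 60.0 + 27.284                 # seconds past 21:15, first "Deinit core IO channel" step
    print(size_mib / (t_end - t_start))   # -> ~26.3 MBps, matching "(average 26 MBps)"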
[2024-07-13 21:16:27.284348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.470 [2024-07-13 21:16:27.284444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:13.470 [2024-07-13 21:16:27.284489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:25:13.470 [2024-07-13 21:16:27.284508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.470 [2024-07-13 21:16:27.284551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:13.470 [2024-07-13 21:16:27.287629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.470 [2024-07-13 21:16:27.287674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:13.470 [2024-07-13 21:16:27.287704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.047 ms
00:25:13.470 [2024-07-13 21:16:27.287714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.470 [2024-07-13 21:16:27.287977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.470 [2024-07-13 21:16:27.287996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:13.470 [2024-07-13 21:16:27.288012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms
00:25:13.470 [2024-07-13 21:16:27.288023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.470 [2024-07-13 21:16:27.301113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.470 [2024-07-13 21:16:27.301174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:13.470 [2024-07-13 21:16:27.301190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.069 ms
00:25:13.470 [2024-07-13 21:16:27.301201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.470 [2024-07-13 21:16:27.306366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.470 [2024-07-13 21:16:27.306396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:25:13.470 [2024-07-13 21:16:27.306425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.125 ms
00:25:13.470 [2024-07-13 21:16:27.306442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.470 [2024-07-13 21:16:27.330559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.470 [2024-07-13 21:16:27.330597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:13.470 [2024-07-13 21:16:27.330627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.051 ms
00:25:13.470 [2024-07-13 21:16:27.330637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.470 [2024-07-13 21:16:27.344644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.470 [2024-07-13 21:16:27.344697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:13.470 [2024-07-13 21:16:27.344727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.969 ms
00:25:13.470 [2024-07-13 21:16:27.344737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.470 [2024-07-13 21:16:27.347930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.470 [2024-07-13 21:16:27.347983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:13.470 [2024-07-13 21:16:27.348013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms
00:25:13.470 [2024-07-13 21:16:27.348024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.470 [2024-07-13 21:16:27.372472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.470 [2024-07-13 21:16:27.372506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:25:13.470 [2024-07-13 21:16:27.372536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.421 ms
00:25:13.470 [2024-07-13 21:16:27.372546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.729 [2024-07-13 21:16:27.397699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.729 [2024-07-13 21:16:27.397768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:25:13.729 [2024-07-13 21:16:27.397798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.117 ms
00:25:13.729 [2024-07-13 21:16:27.397808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.729 [2024-07-13 21:16:27.421788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.729 [2024-07-13 21:16:27.421832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:13.729 [2024-07-13 21:16:27.421889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.918 ms
00:25:13.729 [2024-07-13 21:16:27.421899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.729 [2024-07-13 21:16:27.445525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.729 [2024-07-13 21:16:27.445560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:13.729 [2024-07-13 21:16:27.445590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.549 ms
00:25:13.729 [2024-07-13 21:16:27.445599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:13.729 [2024-07-13 21:16:27.445636] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:13.729 [2024-07-13 21:16:27.445657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:25:13.729 [2024-07-13 21:16:27.445669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open
00:25:13.729 [2024-07-13 21:16:27.445679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.445991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:25:13.730 [2024-07-13 21:16:27.446714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:25:13.731 [2024-07-13 21:16:27.446724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:13.731 [2024-07-13 21:16:27.446741] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:13.731 [2024-07-13 21:16:27.446751] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d123a481-2c73-4fd0-aeba-33bdf8de3021
00:25:13.731 [2024-07-13 21:16:27.446761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448
00:25:13.731 [2024-07-13 21:16:27.446771] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 182464
00:25:13.731 [2024-07-13 21:16:27.446780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 180480
00:25:13.731 [2024-07-13 21:16:27.446797] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0110
00:25:13.731 [2024-07-13 21:16:27.446807] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:13.731 [2024-07-13 21:16:27.446817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:13.731 [2024-07-13 21:16:27.446827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:13.731 [2024-07-13 21:16:27.446836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:13.731 [2024-07-13 21:16:27.446845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:13.731 [2024-07-13 21:16:27.446855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:13.731 [2024-07-13 21:16:27.446876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:13.731 [2024-07-13 21:16:27.446889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms
00:25:13.731 [2024-07-13 21:16:27.446899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
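Two consistency checks on this second statistics dump: the valid-LBA total is the sum over the band dump above (only Bands 1 and 2 hold valid data after the 1 GiB copy), and the WAF again follows from the write counters (plain arithmetic in Python, not SPDK code):

    band_valid = {1: 261120, 2: 3328}   # valid blocks per band, from "Bands validity" above
    print(sum(band_valid.values()))     # -> 264448, the reported "total valid LBAs"
    total_writes, user_writes = 182464, 180480
    print(f"{total_writes / user_writes:.4f}")  # -> 1.0110, the reported WAF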
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.499700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.731 [2024-07-13 21:16:27.499724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:13.731 [2024-07-13 21:16:27.499735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.731 [2024-07-13 21:16:27.499745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.499783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.731 [2024-07-13 21:16:27.499795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:13.731 [2024-07-13 21:16:27.499806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.731 [2024-07-13 21:16:27.499816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.577174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.731 [2024-07-13 21:16:27.577240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:13.731 [2024-07-13 21:16:27.577275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.731 [2024-07-13 21:16:27.577284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.608772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.731 [2024-07-13 21:16:27.608823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:13.731 [2024-07-13 21:16:27.608870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.731 [2024-07-13 21:16:27.608900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.608980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.731 [2024-07-13 21:16:27.609010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:13.731 [2024-07-13 21:16:27.609027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.731 [2024-07-13 21:16:27.609037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.609104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.731 [2024-07-13 21:16:27.609118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:13.731 [2024-07-13 21:16:27.609129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.731 [2024-07-13 21:16:27.609139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.609245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.731 [2024-07-13 21:16:27.609263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:13.731 [2024-07-13 21:16:27.609274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.731 [2024-07-13 21:16:27.609289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.609342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.731 [2024-07-13 21:16:27.609357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:13.731 [2024-07-13 21:16:27.609368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:25:13.731 [2024-07-13 21:16:27.609379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.609433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.731 [2024-07-13 21:16:27.609446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:13.731 [2024-07-13 21:16:27.609457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.731 [2024-07-13 21:16:27.609471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.609517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.731 [2024-07-13 21:16:27.609531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:13.731 [2024-07-13 21:16:27.609542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.731 [2024-07-13 21:16:27.609551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.731 [2024-07-13 21:16:27.609689] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 325.325 ms, result 0 00:25:14.665 00:25:14.665 00:25:14.665 21:16:28 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:16.567 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:16.567 21:16:30 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:16.567 [2024-07-13 21:16:30.343569] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
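
The spdk_dd invocation just launched above is the read-back half of the dirty-shutdown check: step 94 verified the first 262144-block extent against testfile.md5, and step 95 reads the second 262144-block extent (--skip=262144) out of the restarted ftl0 bdev into testfile2 for the step 96 comparison further down. The 1024 MiB copy total reported below implies a 4 KiB block size (262144 x 4 KiB = 1024 MiB). A minimal sketch of the sequence, using only the paths and flags shown in this log — an illustration of the pattern, not the test script itself:

    # Sketch of the verification sequence driven by dirty_shutdown.sh steps 94-96.
    SPDK=/home/vagrant/spdk_repo/spdk

    # Step 94: the first 262144-block extent, read back earlier into 'testfile',
    # is checked against the md5 recorded before the dirty shutdown.
    md5sum -c "$SPDK/test/ftl/testfile.md5"

    # Step 95: read the second extent (--skip=262144 blocks) from the restarted
    # ftl0 bdev into 'testfile2', using the FTL config saved in ftl.json.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile2" \
        --count=262144 --skip=262144 \
        --json="$SPDK/test/ftl/config/ftl.json"

    # Step 96: compare the read-back data against the pre-shutdown checksum
    # (the 'testfile2: OK' result appears later in the log).
    md5sum -c "$SPDK/test/ftl/testfile2.md5"
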
00:25:16.567 [2024-07-13 21:16:30.343732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77637 ] 00:25:16.826 [2024-07-13 21:16:30.517433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.826 [2024-07-13 21:16:30.701667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.084 [2024-07-13 21:16:30.951916] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:17.084 [2024-07-13 21:16:30.951998] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:17.345 [2024-07-13 21:16:31.100296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.100357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:17.345 [2024-07-13 21:16:31.100391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:17.345 [2024-07-13 21:16:31.100402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.100462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.100479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:17.345 [2024-07-13 21:16:31.100490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:17.345 [2024-07-13 21:16:31.100500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.100527] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:17.345 [2024-07-13 21:16:31.101447] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:17.345 [2024-07-13 21:16:31.101511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.101524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:17.345 [2024-07-13 21:16:31.101535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:25:17.345 [2024-07-13 21:16:31.101545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.102686] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:17.345 [2024-07-13 21:16:31.115423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.115461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:17.345 [2024-07-13 21:16:31.115480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.738 ms 00:25:17.345 [2024-07-13 21:16:31.115490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.115552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.115570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:17.345 [2024-07-13 21:16:31.115580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:17.345 [2024-07-13 21:16:31.115589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.119812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 
21:16:31.119858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:17.345 [2024-07-13 21:16:31.119872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.170 ms 00:25:17.345 [2024-07-13 21:16:31.119882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.119974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.119991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:17.345 [2024-07-13 21:16:31.120001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:17.345 [2024-07-13 21:16:31.120010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.120070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.120089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:17.345 [2024-07-13 21:16:31.120100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:17.345 [2024-07-13 21:16:31.120108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.120140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:17.345 [2024-07-13 21:16:31.123689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.123720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:17.345 [2024-07-13 21:16:31.123732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.559 ms 00:25:17.345 [2024-07-13 21:16:31.123741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.123775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.123788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:17.345 [2024-07-13 21:16:31.123798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:17.345 [2024-07-13 21:16:31.123807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.123830] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:17.345 [2024-07-13 21:16:31.123891] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:17.345 [2024-07-13 21:16:31.123925] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:17.345 [2024-07-13 21:16:31.123941] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:17.345 [2024-07-13 21:16:31.124006] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:17.345 [2024-07-13 21:16:31.124020] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:17.345 [2024-07-13 21:16:31.124031] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:17.345 [2024-07-13 21:16:31.124043] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:17.345 [2024-07-13 21:16:31.124054] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:17.345 [2024-07-13 21:16:31.124067] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:17.345 [2024-07-13 21:16:31.124076] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:17.345 [2024-07-13 21:16:31.124085] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:17.345 [2024-07-13 21:16:31.124093] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:17.345 [2024-07-13 21:16:31.124103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.124113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:17.345 [2024-07-13 21:16:31.124122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:25:17.345 [2024-07-13 21:16:31.124131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.124190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.345 [2024-07-13 21:16:31.124217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:17.345 [2024-07-13 21:16:31.124229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:17.345 [2024-07-13 21:16:31.124238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.345 [2024-07-13 21:16:31.124320] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:17.345 [2024-07-13 21:16:31.124335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:17.345 [2024-07-13 21:16:31.124345] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.345 [2024-07-13 21:16:31.124354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.345 [2024-07-13 21:16:31.124363] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:17.345 [2024-07-13 21:16:31.124371] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:17.345 [2024-07-13 21:16:31.124380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:17.345 [2024-07-13 21:16:31.124389] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:17.345 [2024-07-13 21:16:31.124398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:17.345 [2024-07-13 21:16:31.124405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.345 [2024-07-13 21:16:31.124413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:17.345 [2024-07-13 21:16:31.124424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:17.345 [2024-07-13 21:16:31.124432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.345 [2024-07-13 21:16:31.124439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:17.345 [2024-07-13 21:16:31.124448] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:17.345 [2024-07-13 21:16:31.124455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.345 [2024-07-13 21:16:31.124464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:17.345 [2024-07-13 21:16:31.124472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:17.345 [2024-07-13 21:16:31.124480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:25:17.345 [2024-07-13 21:16:31.124488] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:17.345 [2024-07-13 21:16:31.124497] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:17.345 [2024-07-13 21:16:31.124517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:17.345 [2024-07-13 21:16:31.124526] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:17.345 [2024-07-13 21:16:31.124534] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:17.345 [2024-07-13 21:16:31.124542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:17.345 [2024-07-13 21:16:31.124550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:17.345 [2024-07-13 21:16:31.124558] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:17.345 [2024-07-13 21:16:31.124566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:17.345 [2024-07-13 21:16:31.124574] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:17.345 [2024-07-13 21:16:31.124581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:17.345 [2024-07-13 21:16:31.124589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:17.345 [2024-07-13 21:16:31.124597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:17.345 [2024-07-13 21:16:31.124605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:17.345 [2024-07-13 21:16:31.124612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:17.346 [2024-07-13 21:16:31.124663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:17.346 [2024-07-13 21:16:31.124672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:17.346 [2024-07-13 21:16:31.124680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.346 [2024-07-13 21:16:31.124688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:17.346 [2024-07-13 21:16:31.124698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:17.346 [2024-07-13 21:16:31.124706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.346 [2024-07-13 21:16:31.124714] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:17.346 [2024-07-13 21:16:31.124723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:17.346 [2024-07-13 21:16:31.124732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.346 [2024-07-13 21:16:31.124747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.346 [2024-07-13 21:16:31.124757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:17.346 [2024-07-13 21:16:31.124765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:17.346 [2024-07-13 21:16:31.124774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:17.346 [2024-07-13 21:16:31.124783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:17.346 [2024-07-13 21:16:31.124792] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:17.346 [2024-07-13 21:16:31.124800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:17.346 [2024-07-13 21:16:31.124810] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:17.346 [2024-07-13 21:16:31.124821] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.346 [2024-07-13 21:16:31.124832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:17.346 [2024-07-13 21:16:31.124841] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:17.346 [2024-07-13 21:16:31.124863] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:17.346 [2024-07-13 21:16:31.124875] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:17.346 [2024-07-13 21:16:31.124885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:17.346 [2024-07-13 21:16:31.124894] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:17.346 [2024-07-13 21:16:31.124903] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:17.346 [2024-07-13 21:16:31.124912] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:17.346 [2024-07-13 21:16:31.124922] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:17.346 [2024-07-13 21:16:31.124932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:17.346 [2024-07-13 21:16:31.124941] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:17.346 [2024-07-13 21:16:31.124966] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:17.346 [2024-07-13 21:16:31.124976] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:17.346 [2024-07-13 21:16:31.124985] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:17.346 [2024-07-13 21:16:31.124995] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.346 [2024-07-13 21:16:31.125018] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:17.346 [2024-07-13 21:16:31.125027] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:17.346 [2024-07-13 21:16:31.125035] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:17.346 [2024-07-13 21:16:31.125045] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:25:17.346 [2024-07-13 21:16:31.125055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.125064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:17.346 [2024-07-13 21:16:31.125073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:25:17.346 [2024-07-13 21:16:31.125082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.139953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.139989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:17.346 [2024-07-13 21:16:31.140020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.807 ms 00:25:17.346 [2024-07-13 21:16:31.140030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.140112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.140130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:17.346 [2024-07-13 21:16:31.140141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:17.346 [2024-07-13 21:16:31.140150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.181748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.181795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:17.346 [2024-07-13 21:16:31.181842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.540 ms 00:25:17.346 [2024-07-13 21:16:31.181873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.181963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.181996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:17.346 [2024-07-13 21:16:31.182009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:17.346 [2024-07-13 21:16:31.182019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.182422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.182446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:17.346 [2024-07-13 21:16:31.182459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:25:17.346 [2024-07-13 21:16:31.182469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.182640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.182657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:17.346 [2024-07-13 21:16:31.182668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:25:17.346 [2024-07-13 21:16:31.182678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.197673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.197710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:17.346 [2024-07-13 21:16:31.197741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.971 ms 00:25:17.346 [2024-07-13 
21:16:31.197750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.211373] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:17.346 [2024-07-13 21:16:31.211410] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:17.346 [2024-07-13 21:16:31.211442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.211452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:17.346 [2024-07-13 21:16:31.211463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.516 ms 00:25:17.346 [2024-07-13 21:16:31.211472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.235528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.235563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:17.346 [2024-07-13 21:16:31.235594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.016 ms 00:25:17.346 [2024-07-13 21:16:31.235603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.248287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.248322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:17.346 [2024-07-13 21:16:31.248352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.642 ms 00:25:17.346 [2024-07-13 21:16:31.248361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.261301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.261335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:17.346 [2024-07-13 21:16:31.261366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.902 ms 00:25:17.346 [2024-07-13 21:16:31.261375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.346 [2024-07-13 21:16:31.261760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.346 [2024-07-13 21:16:31.261781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:17.346 [2024-07-13 21:16:31.261793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:25:17.346 [2024-07-13 21:16:31.261803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.606 [2024-07-13 21:16:31.323560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.606 [2024-07-13 21:16:31.323613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:17.606 [2024-07-13 21:16:31.323647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.735 ms 00:25:17.606 [2024-07-13 21:16:31.323658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.606 [2024-07-13 21:16:31.333900] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:17.606 [2024-07-13 21:16:31.335936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.606 [2024-07-13 21:16:31.335964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:17.606 [2024-07-13 21:16:31.335993] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.224 ms 00:25:17.606 [2024-07-13 21:16:31.336003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.606 [2024-07-13 21:16:31.336087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.606 [2024-07-13 21:16:31.336105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:17.606 [2024-07-13 21:16:31.336116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:17.606 [2024-07-13 21:16:31.336125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.606 [2024-07-13 21:16:31.336722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.606 [2024-07-13 21:16:31.336743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:17.606 [2024-07-13 21:16:31.336754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:25:17.606 [2024-07-13 21:16:31.336763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.606 [2024-07-13 21:16:31.338456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.606 [2024-07-13 21:16:31.338485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:17.606 [2024-07-13 21:16:31.338517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:25:17.606 [2024-07-13 21:16:31.338526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.606 [2024-07-13 21:16:31.338559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.606 [2024-07-13 21:16:31.338572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:17.606 [2024-07-13 21:16:31.338582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:17.606 [2024-07-13 21:16:31.338597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.606 [2024-07-13 21:16:31.338633] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:17.606 [2024-07-13 21:16:31.338648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.606 [2024-07-13 21:16:31.338657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:17.606 [2024-07-13 21:16:31.338667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:17.606 [2024-07-13 21:16:31.338679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.606 [2024-07-13 21:16:31.364167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.606 [2024-07-13 21:16:31.364205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:17.606 [2024-07-13 21:16:31.364252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.466 ms 00:25:17.606 [2024-07-13 21:16:31.364276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.606 [2024-07-13 21:16:31.364341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.606 [2024-07-13 21:16:31.364363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:17.606 [2024-07-13 21:16:31.364374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:17.606 [2024-07-13 21:16:31.364382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.606 [2024-07-13 21:16:31.365958] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 265.082 ms, result 0 00:26:02.940  Copying: 23/1024 [MB] (23 MBps) Copying: 46/1024 [MB] (22 MBps) Copying: 68/1024 [MB] (22 MBps) Copying: 91/1024 [MB] (22 MBps) Copying: 114/1024 [MB] (23 MBps) Copying: 136/1024 [MB] (21 MBps) Copying: 159/1024 [MB] (22 MBps) Copying: 181/1024 [MB] (22 MBps) Copying: 204/1024 [MB] (22 MBps) Copying: 226/1024 [MB] (22 MBps) Copying: 248/1024 [MB] (22 MBps) Copying: 271/1024 [MB] (22 MBps) Copying: 293/1024 [MB] (22 MBps) Copying: 315/1024 [MB] (22 MBps) Copying: 338/1024 [MB] (22 MBps) Copying: 360/1024 [MB] (22 MBps) Copying: 383/1024 [MB] (22 MBps) Copying: 406/1024 [MB] (22 MBps) Copying: 428/1024 [MB] (22 MBps) Copying: 451/1024 [MB] (22 MBps) Copying: 474/1024 [MB] (22 MBps) Copying: 496/1024 [MB] (22 MBps) Copying: 519/1024 [MB] (23 MBps) Copying: 542/1024 [MB] (22 MBps) Copying: 565/1024 [MB] (23 MBps) Copying: 588/1024 [MB] (23 MBps) Copying: 611/1024 [MB] (23 MBps) Copying: 635/1024 [MB] (23 MBps) Copying: 658/1024 [MB] (22 MBps) Copying: 681/1024 [MB] (22 MBps) Copying: 704/1024 [MB] (23 MBps) Copying: 727/1024 [MB] (22 MBps) Copying: 749/1024 [MB] (22 MBps) Copying: 773/1024 [MB] (23 MBps) Copying: 795/1024 [MB] (22 MBps) Copying: 817/1024 [MB] (22 MBps) Copying: 841/1024 [MB] (23 MBps) Copying: 865/1024 [MB] (23 MBps) Copying: 887/1024 [MB] (22 MBps) Copying: 910/1024 [MB] (22 MBps) Copying: 932/1024 [MB] (22 MBps) Copying: 955/1024 [MB] (22 MBps) Copying: 978/1024 [MB] (23 MBps) Copying: 1001/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-13 21:17:16.766683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.940 [2024-07-13 21:17:16.767064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:02.940 [2024-07-13 21:17:16.767250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:02.940 [2024-07-13 21:17:16.767327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.940 [2024-07-13 21:17:16.767492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:02.940 [2024-07-13 21:17:16.771086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.940 [2024-07-13 21:17:16.771340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:02.940 [2024-07-13 21:17:16.771483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:26:02.940 [2024-07-13 21:17:16.771518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.940 [2024-07-13 21:17:16.771821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.940 [2024-07-13 21:17:16.771860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:02.940 [2024-07-13 21:17:16.771891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:26:02.940 [2024-07-13 21:17:16.772121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.940 [2024-07-13 21:17:16.775441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.940 [2024-07-13 21:17:16.775626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:02.940 [2024-07-13 21:17:16.775779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:26:02.940 [2024-07-13 21:17:16.775821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.940 [2024-07-13 
21:17:16.781743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.940 [2024-07-13 21:17:16.781786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:02.940 [2024-07-13 21:17:16.781800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.830 ms 00:26:02.940 [2024-07-13 21:17:16.781810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.940 [2024-07-13 21:17:16.807406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.940 [2024-07-13 21:17:16.807447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:02.940 [2024-07-13 21:17:16.807464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.465 ms 00:26:02.940 [2024-07-13 21:17:16.807474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.940 [2024-07-13 21:17:16.822882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.940 [2024-07-13 21:17:16.822960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:02.940 [2024-07-13 21:17:16.822993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.367 ms 00:26:02.940 [2024-07-13 21:17:16.823004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.940 [2024-07-13 21:17:16.826951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.940 [2024-07-13 21:17:16.827000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:02.940 [2024-07-13 21:17:16.827032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.899 ms 00:26:02.940 [2024-07-13 21:17:16.827043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.940 [2024-07-13 21:17:16.852263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.940 [2024-07-13 21:17:16.852301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:02.940 [2024-07-13 21:17:16.852331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.199 ms 00:26:02.940 [2024-07-13 21:17:16.852341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.201 [2024-07-13 21:17:16.878210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.201 [2024-07-13 21:17:16.878372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:03.201 [2024-07-13 21:17:16.878532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.830 ms 00:26:03.201 [2024-07-13 21:17:16.878641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.201 [2024-07-13 21:17:16.902661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.201 [2024-07-13 21:17:16.902825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:03.201 [2024-07-13 21:17:16.902997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.945 ms 00:26:03.201 [2024-07-13 21:17:16.903020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.201 [2024-07-13 21:17:16.927056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.201 [2024-07-13 21:17:16.927101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:03.201 [2024-07-13 21:17:16.927116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.952 ms 00:26:03.201 [2024-07-13 21:17:16.927125] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.201 [2024-07-13 21:17:16.927161] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:03.201 [2024-07-13 21:17:16.927182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:03.201 [2024-07-13 21:17:16.927193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:26:03.201 [2024-07-13 21:17:16.927203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927609] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 
21:17:16.927829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:03.201 [2024-07-13 21:17:16.927951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.927962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.927987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.927996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:26:03.202 [2024-07-13 21:17:16.928161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:03.202 [2024-07-13 21:17:16.928188] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:03.202 [2024-07-13 21:17:16.928198] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d123a481-2c73-4fd0-aeba-33bdf8de3021 00:26:03.202 [2024-07-13 21:17:16.928214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:26:03.202 [2024-07-13 21:17:16.928223] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:03.202 [2024-07-13 21:17:16.928248] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:03.202 [2024-07-13 21:17:16.928273] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:03.202 [2024-07-13 21:17:16.928282] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:03.202 [2024-07-13 21:17:16.928306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:03.202 [2024-07-13 21:17:16.928331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:03.202 [2024-07-13 21:17:16.928355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:03.202 [2024-07-13 21:17:16.928364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:03.202 [2024-07-13 21:17:16.928374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.202 [2024-07-13 21:17:16.928383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:03.202 [2024-07-13 21:17:16.928393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms 00:26:03.202 [2024-07-13 21:17:16.928414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:16.941697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.202 [2024-07-13 21:17:16.941728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:03.202 [2024-07-13 21:17:16.941742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.243 ms 00:26:03.202 [2024-07-13 21:17:16.941751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:16.942033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.202 [2024-07-13 21:17:16.942051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:03.202 [2024-07-13 21:17:16.942068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:26:03.202 [2024-07-13 21:17:16.942078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:16.977240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:16.977277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.202 [2024-07-13 21:17:16.977290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:16.977299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:16.977346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:16.977358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:26:03.202 [2024-07-13 21:17:16.977373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:16.977382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:16.977452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:16.977469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.202 [2024-07-13 21:17:16.977478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:16.977487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:16.977505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:16.977525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.202 [2024-07-13 21:17:16.977534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:16.977548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:17.051898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:17.051958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.202 [2024-07-13 21:17:17.051974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:17.051982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:17.082053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:17.082087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.202 [2024-07-13 21:17:17.082101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:17.082116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:17.082187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:17.082202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.202 [2024-07-13 21:17:17.082212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:17.082221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:17.082263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:17.082276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.202 [2024-07-13 21:17:17.082285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:17.082294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:17.082391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:17.082407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.202 [2024-07-13 21:17:17.082417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:17.082426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:17.082468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:03.202 [2024-07-13 21:17:17.082483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:03.202 [2024-07-13 21:17:17.082492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:17.082502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:17.082544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:17.082557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:03.202 [2024-07-13 21:17:17.082566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:17.082575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:17.082618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.202 [2024-07-13 21:17:17.082632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:03.202 [2024-07-13 21:17:17.082641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.202 [2024-07-13 21:17:17.082650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.202 [2024-07-13 21:17:17.082782] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 316.085 ms, result 0 00:26:04.140 00:26:04.140 00:26:04.140 21:17:17 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:06.045 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:06.045 21:17:19 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:06.045 21:17:19 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:06.045 21:17:19 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:06.045 21:17:19 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:06.045 21:17:19 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:06.304 21:17:20 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:06.304 21:17:20 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:06.304 Process with pid 75687 is not found 00:26:06.304 21:17:20 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75687 00:26:06.304 21:17:20 -- common/autotest_common.sh@926 -- # '[' -z 75687 ']' 00:26:06.304 21:17:20 -- common/autotest_common.sh@930 -- # kill -0 75687 00:26:06.304 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (75687) - No such process 00:26:06.304 21:17:20 -- common/autotest_common.sh@953 -- # echo 'Process with pid 75687 is not found' 00:26:06.304 21:17:20 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:06.563 Remove shared memory files 00:26:06.563 21:17:20 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:06.563 21:17:20 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:06.563 21:17:20 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:06.563 21:17:20 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:06.563 21:17:20 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:06.563 21:17:20 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:06.563 21:17:20 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:06.563 00:26:06.563 real 3m57.718s 00:26:06.563 user 4m35.637s 00:26:06.563 sys 0m33.507s 00:26:06.563 21:17:20 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:26:06.563 21:17:20 -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 ************************************ 00:26:06.563 END TEST ftl_dirty_shutdown 00:26:06.563 ************************************ 00:26:06.563 21:17:20 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:06.563 21:17:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:26:06.563 21:17:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.563 21:17:20 -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 ************************************ 00:26:06.563 START TEST ftl_upgrade_shutdown 00:26:06.563 ************************************ 00:26:06.563 21:17:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:06.563 * Looking for test storage... 00:26:06.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:06.823 21:17:20 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:06.823 21:17:20 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:06.823 21:17:20 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:06.823 21:17:20 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:06.823 21:17:20 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:06.823 21:17:20 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:06.823 21:17:20 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:06.823 21:17:20 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:06.823 21:17:20 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:06.823 21:17:20 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:06.823 21:17:20 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:06.823 21:17:20 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:06.823 21:17:20 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:06.823 21:17:20 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:06.823 21:17:20 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:06.823 21:17:20 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:06.823 21:17:20 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:06.823 21:17:20 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:06.823 21:17:20 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:06.823 21:17:20 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:06.823 21:17:20 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:06.823 21:17:20 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:06.823 21:17:20 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:06.823 21:17:20 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:06.823 21:17:20 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:06.823 21:17:20 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:06.823 21:17:20 -- ftl/common.sh@25 -- # export 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.823 21:17:20 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:06.823 21:17:20 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:06.823 21:17:20 -- ftl/common.sh@81 -- # local base_bdev= 00:26:06.823 21:17:20 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:06.823 21:17:20 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:06.823 21:17:20 -- ftl/common.sh@89 -- # spdk_tgt_pid=78191 00:26:06.823 21:17:20 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:06.823 21:17:20 -- ftl/common.sh@91 -- # waitforlisten 78191 00:26:06.823 21:17:20 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:06.823 21:17:20 -- common/autotest_common.sh@819 -- # '[' -z 78191 ']' 00:26:06.823 21:17:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.823 21:17:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:06.823 21:17:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.823 21:17:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:06.823 21:17:20 -- common/autotest_common.sh@10 -- # set +x 00:26:06.823 [2024-07-13 21:17:20.629356] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
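The target bring-up above reduces to a short sequence: launch spdk_tgt pinned to core 0, then poll its RPC socket until the process answers. A minimal sketch of that sequence, assuming the repository paths used in this run; rpc_get_methods serves here as the readiness probe, which is what the harness's waitforlisten helper effectively does:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Launch the FTL target on core 0, as tcp_target_setup does above.
    "$SPDK/build/bin/spdk_tgt" --cpumask='[0]' &
    spdk_tgt_pid=$!
    # Poll the default RPC socket until the target is ready to accept RPCs.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done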
00:26:06.823 [2024-07-13 21:17:20.629523] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78191 ] 00:26:07.082 [2024-07-13 21:17:20.799576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.082 [2024-07-13 21:17:20.992324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:07.082 [2024-07-13 21:17:20.992536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.495 21:17:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:08.495 21:17:22 -- common/autotest_common.sh@852 -- # return 0 00:26:08.495 21:17:22 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:08.495 21:17:22 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:08.495 21:17:22 -- ftl/common.sh@99 -- # local params 00:26:08.495 21:17:22 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:08.495 21:17:22 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:08.495 21:17:22 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:08.495 21:17:22 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:08.495 21:17:22 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:08.495 21:17:22 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:08.495 21:17:22 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:08.495 21:17:22 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:08.495 21:17:22 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:08.495 21:17:22 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:08.495 21:17:22 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:08.495 21:17:22 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:08.495 21:17:22 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:08.495 21:17:22 -- ftl/common.sh@54 -- # local name=base 00:26:08.495 21:17:22 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:08.495 21:17:22 -- ftl/common.sh@56 -- # local size=20480 00:26:08.495 21:17:22 -- ftl/common.sh@59 -- # local base_bdev 00:26:08.495 21:17:22 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:26:08.765 21:17:22 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:08.765 21:17:22 -- ftl/common.sh@62 -- # local base_size 00:26:08.765 21:17:22 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:08.765 21:17:22 -- common/autotest_common.sh@1357 -- # local bdev_name=basen1 00:26:08.765 21:17:22 -- common/autotest_common.sh@1358 -- # local bdev_info 00:26:08.765 21:17:22 -- common/autotest_common.sh@1359 -- # local bs 00:26:08.765 21:17:22 -- common/autotest_common.sh@1360 -- # local nb 00:26:08.765 21:17:22 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:08.765 21:17:22 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:26:08.765 { 00:26:08.765 "name": "basen1", 00:26:08.765 "aliases": [ 00:26:08.765 "f18dcac7-e8e5-4c1e-9013-162617f3b51d" 00:26:08.765 ], 00:26:08.765 "product_name": "NVMe disk", 00:26:08.765 "block_size": 4096, 00:26:08.765 "num_blocks": 1310720, 00:26:08.765 "uuid": "f18dcac7-e8e5-4c1e-9013-162617f3b51d", 00:26:08.765 "assigned_rate_limits": { 00:26:08.765 "rw_ios_per_sec": 0, 00:26:08.765 
"rw_mbytes_per_sec": 0, 00:26:08.765 "r_mbytes_per_sec": 0, 00:26:08.765 "w_mbytes_per_sec": 0 00:26:08.765 }, 00:26:08.765 "claimed": true, 00:26:08.765 "claim_type": "read_many_write_one", 00:26:08.765 "zoned": false, 00:26:08.765 "supported_io_types": { 00:26:08.765 "read": true, 00:26:08.765 "write": true, 00:26:08.765 "unmap": true, 00:26:08.765 "write_zeroes": true, 00:26:08.765 "flush": true, 00:26:08.765 "reset": true, 00:26:08.765 "compare": true, 00:26:08.765 "compare_and_write": false, 00:26:08.765 "abort": true, 00:26:08.765 "nvme_admin": true, 00:26:08.765 "nvme_io": true 00:26:08.765 }, 00:26:08.765 "driver_specific": { 00:26:08.765 "nvme": [ 00:26:08.765 { 00:26:08.765 "pci_address": "0000:00:07.0", 00:26:08.765 "trid": { 00:26:08.765 "trtype": "PCIe", 00:26:08.765 "traddr": "0000:00:07.0" 00:26:08.765 }, 00:26:08.765 "ctrlr_data": { 00:26:08.765 "cntlid": 0, 00:26:08.765 "vendor_id": "0x1b36", 00:26:08.765 "model_number": "QEMU NVMe Ctrl", 00:26:08.765 "serial_number": "12341", 00:26:08.765 "firmware_revision": "8.0.0", 00:26:08.765 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:08.765 "oacs": { 00:26:08.765 "security": 0, 00:26:08.765 "format": 1, 00:26:08.765 "firmware": 0, 00:26:08.765 "ns_manage": 1 00:26:08.765 }, 00:26:08.765 "multi_ctrlr": false, 00:26:08.765 "ana_reporting": false 00:26:08.765 }, 00:26:08.765 "vs": { 00:26:08.765 "nvme_version": "1.4" 00:26:08.765 }, 00:26:08.765 "ns_data": { 00:26:08.765 "id": 1, 00:26:08.765 "can_share": false 00:26:08.765 } 00:26:08.765 } 00:26:08.765 ], 00:26:08.765 "mp_policy": "active_passive" 00:26:08.765 } 00:26:08.765 } 00:26:08.765 ]' 00:26:08.765 21:17:22 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:26:09.024 21:17:22 -- common/autotest_common.sh@1362 -- # bs=4096 00:26:09.024 21:17:22 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:26:09.024 21:17:22 -- common/autotest_common.sh@1363 -- # nb=1310720 00:26:09.024 21:17:22 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:26:09.024 21:17:22 -- common/autotest_common.sh@1367 -- # echo 5120 00:26:09.024 21:17:22 -- ftl/common.sh@63 -- # base_size=5120 00:26:09.024 21:17:22 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:09.024 21:17:22 -- ftl/common.sh@67 -- # clear_lvols 00:26:09.024 21:17:22 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:09.024 21:17:22 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:09.024 21:17:22 -- ftl/common.sh@28 -- # stores=9eeba675-a7ae-434d-851a-df33acc583f2 00:26:09.024 21:17:22 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:09.024 21:17:22 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9eeba675-a7ae-434d-851a-df33acc583f2 00:26:09.284 21:17:23 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:09.542 21:17:23 -- ftl/common.sh@68 -- # lvs=515e5bcf-86e4-47db-96fc-4c09c3c960a6 00:26:09.542 21:17:23 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 515e5bcf-86e4-47db-96fc-4c09c3c960a6 00:26:09.801 21:17:23 -- ftl/common.sh@107 -- # base_bdev=bef53414-392f-46d7-9aea-7c7fd0c8c169 00:26:09.801 21:17:23 -- ftl/common.sh@108 -- # [[ -z bef53414-392f-46d7-9aea-7c7fd0c8c169 ]] 00:26:09.801 21:17:23 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 bef53414-392f-46d7-9aea-7c7fd0c8c169 5120 00:26:09.801 21:17:23 -- ftl/common.sh@35 -- # local name=cache 00:26:09.801 21:17:23 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:09.801 21:17:23 -- ftl/common.sh@37 -- # local base_bdev=bef53414-392f-46d7-9aea-7c7fd0c8c169 00:26:09.801 21:17:23 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:09.801 21:17:23 -- ftl/common.sh@41 -- # get_bdev_size bef53414-392f-46d7-9aea-7c7fd0c8c169 00:26:09.801 21:17:23 -- common/autotest_common.sh@1357 -- # local bdev_name=bef53414-392f-46d7-9aea-7c7fd0c8c169 00:26:09.801 21:17:23 -- common/autotest_common.sh@1358 -- # local bdev_info 00:26:09.801 21:17:23 -- common/autotest_common.sh@1359 -- # local bs 00:26:09.801 21:17:23 -- common/autotest_common.sh@1360 -- # local nb 00:26:09.801 21:17:23 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bef53414-392f-46d7-9aea-7c7fd0c8c169 00:26:10.059 21:17:23 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:26:10.059 { 00:26:10.059 "name": "bef53414-392f-46d7-9aea-7c7fd0c8c169", 00:26:10.059 "aliases": [ 00:26:10.059 "lvs/basen1p0" 00:26:10.059 ], 00:26:10.059 "product_name": "Logical Volume", 00:26:10.059 "block_size": 4096, 00:26:10.059 "num_blocks": 5242880, 00:26:10.059 "uuid": "bef53414-392f-46d7-9aea-7c7fd0c8c169", 00:26:10.059 "assigned_rate_limits": { 00:26:10.059 "rw_ios_per_sec": 0, 00:26:10.059 "rw_mbytes_per_sec": 0, 00:26:10.059 "r_mbytes_per_sec": 0, 00:26:10.059 "w_mbytes_per_sec": 0 00:26:10.059 }, 00:26:10.059 "claimed": false, 00:26:10.059 "zoned": false, 00:26:10.059 "supported_io_types": { 00:26:10.059 "read": true, 00:26:10.059 "write": true, 00:26:10.059 "unmap": true, 00:26:10.059 "write_zeroes": true, 00:26:10.059 "flush": false, 00:26:10.059 "reset": true, 00:26:10.059 "compare": false, 00:26:10.059 "compare_and_write": false, 00:26:10.059 "abort": false, 00:26:10.059 "nvme_admin": false, 00:26:10.059 "nvme_io": false 00:26:10.059 }, 00:26:10.059 "driver_specific": { 00:26:10.059 "lvol": { 00:26:10.059 "lvol_store_uuid": "515e5bcf-86e4-47db-96fc-4c09c3c960a6", 00:26:10.059 "base_bdev": "basen1", 00:26:10.059 "thin_provision": true, 00:26:10.059 "snapshot": false, 00:26:10.059 "clone": false, 00:26:10.059 "esnap_clone": false 00:26:10.059 } 00:26:10.059 } 00:26:10.059 } 00:26:10.059 ]' 00:26:10.059 21:17:23 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:26:10.059 21:17:23 -- common/autotest_common.sh@1362 -- # bs=4096 00:26:10.059 21:17:23 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:26:10.059 21:17:23 -- common/autotest_common.sh@1363 -- # nb=5242880 00:26:10.059 21:17:23 -- common/autotest_common.sh@1366 -- # bdev_size=20480 00:26:10.059 21:17:23 -- common/autotest_common.sh@1367 -- # echo 20480 00:26:10.059 21:17:23 -- ftl/common.sh@41 -- # local base_size=1024 00:26:10.059 21:17:23 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:10.059 21:17:23 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:10.318 21:17:24 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:10.318 21:17:24 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:10.318 21:17:24 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:10.577 21:17:24 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:10.577 21:17:24 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:10.577 21:17:24 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d bef53414-392f-46d7-9aea-7c7fd0c8c169 -c cachen1p0 --l2p_dram_limit 2 00:26:10.837 
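Before the FTL startup trace that follows, the device stack assembled by the RPCs above can be condensed into a few lines. A sketch using the same RPC names, MiB sizes, and PCI addresses as this run, with the run-specific lvstore and lvol UUIDs captured from the create calls rather than hard-coded:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: a thin-provisioned 20 GiB lvol carved from the controller at 0000:00:07.0.
    $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0
    lvs_uuid=$($RPC bdev_lvol_create_lvstore basen1 lvs)
    lvol_uuid=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs_uuid")
    # Write-buffer (NV) cache: the first 5 GiB split of the controller at 0000:00:06.0.
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0
    $RPC bdev_split_create cachen1 -s 5120 1
    # Bind base + cache into the FTL bdev, capping the resident L2P at 2 MiB of DRAM.
    $RPC -t 60 bdev_ftl_create -b ftl -d "$lvol_uuid" -c cachen1p0 --l2p_dram_limit 2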
[2024-07-13 21:17:24.583160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.837 [2024-07-13 21:17:24.583224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:10.837 [2024-07-13 21:17:24.583277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:10.837 [2024-07-13 21:17:24.583297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.837 [2024-07-13 21:17:24.583361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.837 [2024-07-13 21:17:24.583377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:10.837 [2024-07-13 21:17:24.583390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:10.837 [2024-07-13 21:17:24.583400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.837 [2024-07-13 21:17:24.583427] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:10.837 [2024-07-13 21:17:24.584318] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:10.837 [2024-07-13 21:17:24.584390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.837 [2024-07-13 21:17:24.584403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:10.837 [2024-07-13 21:17:24.584418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.966 ms 00:26:10.837 [2024-07-13 21:17:24.584428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.837 [2024-07-13 21:17:24.584545] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 5ad24337-5379-4549-8ed3-22af24313bce 00:26:10.837 [2024-07-13 21:17:24.585580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.837 [2024-07-13 21:17:24.585633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:10.837 [2024-07-13 21:17:24.585663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:10.837 [2024-07-13 21:17:24.585674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.837 [2024-07-13 21:17:24.589870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.837 [2024-07-13 21:17:24.589949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:10.837 [2024-07-13 21:17:24.589963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.146 ms 00:26:10.837 [2024-07-13 21:17:24.589974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.837 [2024-07-13 21:17:24.590026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.837 [2024-07-13 21:17:24.590044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:10.837 [2024-07-13 21:17:24.590055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:10.837 [2024-07-13 21:17:24.590069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.837 [2024-07-13 21:17:24.590131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.837 [2024-07-13 21:17:24.590150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:10.837 [2024-07-13 21:17:24.590161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:10.837 [2024-07-13 21:17:24.590191] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:10.837 [2024-07-13 21:17:24.590256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:10.837 [2024-07-13 21:17:24.594000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.837 [2024-07-13 21:17:24.594034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:10.837 [2024-07-13 21:17:24.594066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.754 ms 00:26:10.837 [2024-07-13 21:17:24.594076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.837 [2024-07-13 21:17:24.594111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.837 [2024-07-13 21:17:24.594124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:10.837 [2024-07-13 21:17:24.594136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:10.837 [2024-07-13 21:17:24.594146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.837 [2024-07-13 21:17:24.594194] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:10.837 [2024-07-13 21:17:24.594357] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:10.837 [2024-07-13 21:17:24.594379] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:10.837 [2024-07-13 21:17:24.594393] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:10.837 [2024-07-13 21:17:24.594408] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:10.837 [2024-07-13 21:17:24.594433] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:10.837 [2024-07-13 21:17:24.594446] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:10.837 [2024-07-13 21:17:24.594456] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:10.837 [2024-07-13 21:17:24.594468] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:10.837 [2024-07-13 21:17:24.594481] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:10.837 [2024-07-13 21:17:24.594494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.837 [2024-07-13 21:17:24.594505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:10.838 [2024-07-13 21:17:24.594517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:26:10.838 [2024-07-13 21:17:24.594528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.838 [2024-07-13 21:17:24.594595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.838 [2024-07-13 21:17:24.594609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:10.838 [2024-07-13 21:17:24.594634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:26:10.838 [2024-07-13 21:17:24.594644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.838 [2024-07-13 21:17:24.594725] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:10.838 [2024-07-13 21:17:24.594748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:10.838 [2024-07-13 
21:17:24.594763] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:10.838 [2024-07-13 21:17:24.594774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:10.838 [2024-07-13 21:17:24.594787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:10.838 [2024-07-13 21:17:24.594797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:10.838 [2024-07-13 21:17:24.594809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:10.838 [2024-07-13 21:17:24.594819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:10.838 [2024-07-13 21:17:24.594830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:10.838 [2024-07-13 21:17:24.594855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:10.838 [2024-07-13 21:17:24.594868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:10.838 [2024-07-13 21:17:24.594878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:10.838 [2024-07-13 21:17:24.594891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:10.838 [2024-07-13 21:17:24.594901] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:10.838 [2024-07-13 21:17:24.594913] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:10.838 [2024-07-13 21:17:24.594923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:10.838 [2024-07-13 21:17:24.594935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:10.838 [2024-07-13 21:17:24.594945] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:10.838 [2024-07-13 21:17:24.594956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:10.838 [2024-07-13 21:17:24.594966] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:10.838 [2024-07-13 21:17:24.594978] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:10.838 [2024-07-13 21:17:24.594988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:10.838 [2024-07-13 21:17:24.595000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:10.838 [2024-07-13 21:17:24.595009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:10.838 [2024-07-13 21:17:24.595020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:10.838 [2024-07-13 21:17:24.595030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:10.838 [2024-07-13 21:17:24.595041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:10.838 [2024-07-13 21:17:24.595050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:10.838 [2024-07-13 21:17:24.595061] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:10.838 [2024-07-13 21:17:24.595071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:10.838 [2024-07-13 21:17:24.595083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:10.838 [2024-07-13 21:17:24.595092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:10.838 [2024-07-13 21:17:24.595105] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:10.838 [2024-07-13 21:17:24.595115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:10.838 [2024-07-13 
21:17:24.595126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:10.838 [2024-07-13 21:17:24.595136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:10.838 [2024-07-13 21:17:24.595147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:10.838 [2024-07-13 21:17:24.595156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:10.838 [2024-07-13 21:17:24.595168] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:10.838 [2024-07-13 21:17:24.595178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:10.838 [2024-07-13 21:17:24.595189] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:10.838 [2024-07-13 21:17:24.595199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:10.838 [2024-07-13 21:17:24.595211] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:10.838 [2024-07-13 21:17:24.595221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:10.838 [2024-07-13 21:17:24.595233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:10.838 [2024-07-13 21:17:24.595243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:10.838 [2024-07-13 21:17:24.595254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:10.838 [2024-07-13 21:17:24.595264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:10.838 [2024-07-13 21:17:24.595276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:10.838 [2024-07-13 21:17:24.595286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:10.838 [2024-07-13 21:17:24.595299] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:10.838 [2024-07-13 21:17:24.595312] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.838 [2024-07-13 21:17:24.595330] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:10.838 [2024-07-13 21:17:24.595341] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:10.838 [2024-07-13 21:17:24.595353] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:10.838 [2024-07-13 21:17:24.595363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:10.838 [2024-07-13 21:17:24.595375] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:10.838 [2024-07-13 21:17:24.595386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:10.838 [2024-07-13 21:17:24.595397] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:10.838 [2024-07-13 21:17:24.595408] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:10.838 [2024-07-13 21:17:24.595420] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:10.838 [2024-07-13 21:17:24.595430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:10.838 [2024-07-13 21:17:24.595443] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:10.838 [2024-07-13 21:17:24.595453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:10.838 [2024-07-13 21:17:24.595469] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:10.838 [2024-07-13 21:17:24.595480] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:10.838 [2024-07-13 21:17:24.595493] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.838 [2024-07-13 21:17:24.595504] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:10.838 [2024-07-13 21:17:24.595516] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:10.838 [2024-07-13 21:17:24.595527] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:10.838 [2024-07-13 21:17:24.595539] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:10.838 [2024-07-13 21:17:24.595550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.838 [2024-07-13 21:17:24.595563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:10.838 [2024-07-13 21:17:24.595574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.869 ms 00:26:10.838 [2024-07-13 21:17:24.595587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.838 [2024-07-13 21:17:24.610495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.838 [2024-07-13 21:17:24.610552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:10.838 [2024-07-13 21:17:24.610585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.858 ms 00:26:10.838 [2024-07-13 21:17:24.610597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.838 [2024-07-13 21:17:24.610643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.838 [2024-07-13 21:17:24.610661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:10.838 [2024-07-13 21:17:24.610673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:10.838 [2024-07-13 21:17:24.610684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.838 [2024-07-13 21:17:24.641292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.838 [2024-07-13 21:17:24.641336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:10.838 [2024-07-13 21:17:24.641367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.551 ms 00:26:10.838 [2024-07-13 
21:17:24.641379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.838 [2024-07-13 21:17:24.641419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.838 [2024-07-13 21:17:24.641438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:10.838 [2024-07-13 21:17:24.641449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:10.838 [2024-07-13 21:17:24.641461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.838 [2024-07-13 21:17:24.641846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.838 [2024-07-13 21:17:24.641889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:10.838 [2024-07-13 21:17:24.641905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:26:10.838 [2024-07-13 21:17:24.641917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.838 [2024-07-13 21:17:24.641963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.838 [2024-07-13 21:17:24.641993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:10.838 [2024-07-13 21:17:24.642004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:10.838 [2024-07-13 21:17:24.642017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.838 [2024-07-13 21:17:24.656766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.838 [2024-07-13 21:17:24.656821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:10.838 [2024-07-13 21:17:24.656863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.726 ms 00:26:10.838 [2024-07-13 21:17:24.656879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.839 [2024-07-13 21:17:24.667759] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:10.839 [2024-07-13 21:17:24.668782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.839 [2024-07-13 21:17:24.668877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:10.839 [2024-07-13 21:17:24.668899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.785 ms 00:26:10.839 [2024-07-13 21:17:24.668912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.839 [2024-07-13 21:17:24.701250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.839 [2024-07-13 21:17:24.701290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:10.839 [2024-07-13 21:17:24.701324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 32.299 ms 00:26:10.839 [2024-07-13 21:17:24.701335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.839 [2024-07-13 21:17:24.701386] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
00:26:10.839 [2024-07-13 21:17:24.701404] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:15.030 [2024-07-13 21:17:28.392732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.030 [2024-07-13 21:17:28.392821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:15.031 [2024-07-13 21:17:28.392871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3691.364 ms 00:26:15.031 [2024-07-13 21:17:28.392895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.393048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.031 [2024-07-13 21:17:28.393066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:15.031 [2024-07-13 21:17:28.393079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:26:15.031 [2024-07-13 21:17:28.393089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.417556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.031 [2024-07-13 21:17:28.417593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:15.031 [2024-07-13 21:17:28.417626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.389 ms 00:26:15.031 [2024-07-13 21:17:28.417638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.441784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.031 [2024-07-13 21:17:28.441820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:15.031 [2024-07-13 21:17:28.441898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.100 ms 00:26:15.031 [2024-07-13 21:17:28.441912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.442317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.031 [2024-07-13 21:17:28.442346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:15.031 [2024-07-13 21:17:28.442373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms 00:26:15.031 [2024-07-13 21:17:28.442383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.516667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.031 [2024-07-13 21:17:28.516736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:15.031 [2024-07-13 21:17:28.516771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 74.232 ms 00:26:15.031 [2024-07-13 21:17:28.516782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.541881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.031 [2024-07-13 21:17:28.541923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:15.031 [2024-07-13 21:17:28.541958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.051 ms 00:26:15.031 [2024-07-13 21:17:28.541971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.543535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.031 [2024-07-13 21:17:28.543567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:15.031 [2024-07-13 21:17:28.543601] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.519 ms 00:26:15.031 [2024-07-13 21:17:28.543612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.568074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.031 [2024-07-13 21:17:28.568127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:15.031 [2024-07-13 21:17:28.568161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.418 ms 00:26:15.031 [2024-07-13 21:17:28.568172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.568223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.031 [2024-07-13 21:17:28.568255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:15.031 [2024-07-13 21:17:28.568268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:15.031 [2024-07-13 21:17:28.568278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.568364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.031 [2024-07-13 21:17:28.568412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:15.031 [2024-07-13 21:17:28.568443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:26:15.031 [2024-07-13 21:17:28.568454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.031 [2024-07-13 21:17:28.569521] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3985.860 ms, result 0 00:26:15.031 { 00:26:15.031 "name": "ftl", 00:26:15.031 "uuid": "5ad24337-5379-4549-8ed3-22af24313bce" 00:26:15.031 } 00:26:15.031 21:17:28 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:15.031 [2024-07-13 21:17:28.808655] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.031 21:17:28 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:15.290 21:17:29 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:15.548 [2024-07-13 21:17:29.281136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:15.548 21:17:29 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:15.807 [2024-07-13 21:17:29.529632] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:15.807 21:17:29 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:16.066 21:17:29 -- 
ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:16.066 Fill FTL, iteration 1 00:26:16.066 21:17:29 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:16.066 21:17:29 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:16.066 21:17:29 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:16.066 21:17:29 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:16.066 21:17:29 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:16.066 21:17:29 -- ftl/common.sh@163 -- # spdk_ini_pid=78321 00:26:16.066 21:17:29 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:16.066 21:17:29 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:16.066 21:17:29 -- ftl/common.sh@165 -- # waitforlisten 78321 /var/tmp/spdk.tgt.sock 00:26:16.066 21:17:29 -- common/autotest_common.sh@819 -- # '[' -z 78321 ']' 00:26:16.066 21:17:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:16.066 21:17:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:16.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:16.067 21:17:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:16.067 21:17:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:16.067 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:26:16.067 [2024-07-13 21:17:29.939593] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
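Here the initiator half of the test is starting up. Taken together with the nvmf_* RPCs above, the export/attach handshake amounts to the sketch below; every name, address, and flag is as observed in this run, with the target answering on its default RPC socket and the initiator on /var/tmp/spdk.tgt.sock:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target side: expose the FTL bdev through a single-namespace NVMe/TCP subsystem.
    $RPC nvmf_create_transport --trtype TCP
    $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    # Initiator side: connect over TCP; the attached controller surfaces the bdev as ftln1.
    $RPC -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0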
00:26:16.067 [2024-07-13 21:17:29.940332] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78321 ] 00:26:16.325 [2024-07-13 21:17:30.103023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.584 [2024-07-13 21:17:30.255129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:16.584 [2024-07-13 21:17:30.255570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.520 21:17:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:17.520 21:17:31 -- common/autotest_common.sh@852 -- # return 0 00:26:17.520 21:17:31 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:17.778 ftln1 00:26:18.037 21:17:31 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:18.037 21:17:31 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:18.037 21:17:31 -- ftl/common.sh@173 -- # echo ']}' 00:26:18.037 21:17:31 -- ftl/common.sh@176 -- # killprocess 78321 00:26:18.037 21:17:31 -- common/autotest_common.sh@926 -- # '[' -z 78321 ']' 00:26:18.037 21:17:31 -- common/autotest_common.sh@930 -- # kill -0 78321 00:26:18.037 21:17:31 -- common/autotest_common.sh@931 -- # uname 00:26:18.037 21:17:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:18.297 21:17:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78321 00:26:18.297 21:17:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:18.297 21:17:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:18.297 killing process with pid 78321 00:26:18.297 21:17:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78321' 00:26:18.297 21:17:31 -- common/autotest_common.sh@945 -- # kill 78321 00:26:18.297 21:17:31 -- common/autotest_common.sh@950 -- # wait 78321 00:26:20.201 21:17:33 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:20.201 21:17:33 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:20.201 [2024-07-13 21:17:33.717907] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:20.201 [2024-07-13 21:17:33.718055] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78379 ] 00:26:20.202 [2024-07-13 21:17:33.878512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.202 [2024-07-13 21:17:34.028315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.338  Copying: 217/1024 [MB] (217 MBps) Copying: 435/1024 [MB] (218 MBps) Copying: 653/1024 [MB] (218 MBps) Copying: 871/1024 [MB] (218 MBps) Copying: 1024/1024 [MB] (average 217 MBps) 00:26:26.338 00:26:26.338 Calculate MD5 checksum, iteration 1 00:26:26.338 21:17:39 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:26.338 21:17:39 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:26.338 21:17:39 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:26.338 21:17:39 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:26.338 21:17:39 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:26.339 21:17:39 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:26.339 21:17:39 -- ftl/common.sh@154 -- # return 0 00:26:26.339 21:17:39 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:26.339 [2024-07-13 21:17:40.060936] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:26.339 [2024-07-13 21:17:40.061120] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78446 ] 00:26:26.339 [2024-07-13 21:17:40.228931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.597 [2024-07-13 21:17:40.382195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.108  Copying: 477/1024 [MB] (477 MBps) Copying: 957/1024 [MB] (480 MBps) Copying: 1024/1024 [MB] (average 478 MBps) 00:26:30.108 00:26:30.108 21:17:43 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:30.108 21:17:43 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:32.011 21:17:45 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:32.011 Fill FTL, iteration 2 00:26:32.011 21:17:45 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5fcfd487b2c66c54b3df2161cd035dc2 00:26:32.011 21:17:45 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:32.011 21:17:45 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:32.011 21:17:45 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:32.011 21:17:45 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:32.011 21:17:45 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:32.011 21:17:45 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:32.012 21:17:45 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:32.012 21:17:45 -- ftl/common.sh@154 -- # return 0 00:26:32.012 21:17:45 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:32.012 [2024-07-13 21:17:45.604522] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
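[Editor's sketch: the seek/skip bookkeeping traced above (upgrade_shutdown.sh@38-48) repeats once per iteration; a condensed sketch of that loop, assuming a tcp_dd wrapper equivalent to the spdk_dd invocations shown in this log.]

  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
  seek=0 skip=0 i=0 iterations=2
  while (( i < iterations )); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek="$seek"
    seek=$((seek + 1024))
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    skip=$((skip + 1024))
    # e.g. sums[0]=5fcfd487b2c66c54b3df2161cd035dc2 in the trace above
    sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
    i=$((i + 1))
  done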
00:26:32.012 [2024-07-13 21:17:45.604648] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78513 ] 00:26:32.012 [2024-07-13 21:17:45.763078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.270 [2024-07-13 21:17:45.963202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.333  Copying: 214/1024 [MB] (214 MBps) Copying: 423/1024 [MB] (209 MBps) Copying: 630/1024 [MB] (207 MBps) Copying: 842/1024 [MB] (212 MBps) Copying: 1024/1024 [MB] (average 209 MBps) 00:26:38.333 00:26:38.333 Calculate MD5 checksum, iteration 2 00:26:38.333 21:17:52 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:38.333 21:17:52 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:38.333 21:17:52 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:38.333 21:17:52 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:38.333 21:17:52 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:38.333 21:17:52 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:38.333 21:17:52 -- ftl/common.sh@154 -- # return 0 00:26:38.333 21:17:52 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:38.333 [2024-07-13 21:17:52.212942] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:38.333 [2024-07-13 21:17:52.213132] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78578 ] 00:26:38.592 [2024-07-13 21:17:52.379033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.851 [2024-07-13 21:17:52.529363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.827  Copying: 472/1024 [MB] (472 MBps) Copying: 943/1024 [MB] (471 MBps) Copying: 1024/1024 [MB] (average 470 MBps) 00:26:42.827 00:26:42.827 21:17:56 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:42.827 21:17:56 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:44.728 21:17:58 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:44.728 21:17:58 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=67b0f8bee4d61c57b4f23b8bfb7c3123 00:26:44.728 21:17:58 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:44.728 21:17:58 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:44.728 21:17:58 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:44.728 [2024-07-13 21:17:58.600034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.728 [2024-07-13 21:17:58.600083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:44.728 [2024-07-13 21:17:58.600117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:44.728 [2024-07-13 21:17:58.600128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.728 [2024-07-13 21:17:58.600160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.728 [2024-07-13 21:17:58.600173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:44.728 [2024-07-13 21:17:58.600183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:44.728 [2024-07-13 21:17:58.600192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.728 [2024-07-13 21:17:58.600221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.728 [2024-07-13 21:17:58.600233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:44.728 [2024-07-13 21:17:58.600244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:44.728 [2024-07-13 21:17:58.600253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.728 [2024-07-13 21:17:58.600378] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.336 ms, result 0 00:26:44.728 true 00:26:44.728 21:17:58 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:44.987 { 00:26:44.987 "name": "ftl", 00:26:44.987 "properties": [ 00:26:44.987 { 00:26:44.987 "name": "superblock_version", 00:26:44.987 "value": 5, 00:26:44.987 "read-only": true 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "name": "base_device", 00:26:44.987 "bands": [ 00:26:44.987 { 00:26:44.987 "id": 0, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 1, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 2, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 
00:26:44.987 { 00:26:44.987 "id": 3, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 4, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 5, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 6, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 7, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 8, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 9, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 10, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 11, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 12, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 13, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 14, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.987 }, 00:26:44.987 { 00:26:44.987 "id": 15, 00:26:44.987 "state": "FREE", 00:26:44.987 "validity": 0.0 00:26:44.988 }, 00:26:44.988 { 00:26:44.988 "id": 16, 00:26:44.988 "state": "FREE", 00:26:44.988 "validity": 0.0 00:26:44.988 }, 00:26:44.988 { 00:26:44.988 "id": 17, 00:26:44.988 "state": "FREE", 00:26:44.988 "validity": 0.0 00:26:44.988 } 00:26:44.988 ], 00:26:44.988 "read-only": true 00:26:44.988 }, 00:26:44.988 { 00:26:44.988 "name": "cache_device", 00:26:44.988 "type": "bdev", 00:26:44.988 "chunks": [ 00:26:44.988 { 00:26:44.988 "id": 0, 00:26:44.988 "state": "CLOSED", 00:26:44.988 "utilization": 1.0 00:26:44.988 }, 00:26:44.988 { 00:26:44.988 "id": 1, 00:26:44.988 "state": "CLOSED", 00:26:44.988 "utilization": 1.0 00:26:44.988 }, 00:26:44.988 { 00:26:44.988 "id": 2, 00:26:44.988 "state": "OPEN", 00:26:44.988 "utilization": 0.001953125 00:26:44.988 }, 00:26:44.988 { 00:26:44.988 "id": 3, 00:26:44.988 "state": "OPEN", 00:26:44.988 "utilization": 0.0 00:26:44.988 } 00:26:44.988 ], 00:26:44.988 "read-only": true 00:26:44.988 }, 00:26:44.988 { 00:26:44.988 "name": "verbose_mode", 00:26:44.988 "value": true, 00:26:44.988 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:44.988 }, 00:26:44.988 { 00:26:44.988 "name": "prep_upgrade_on_shutdown", 00:26:44.988 "value": false, 00:26:44.988 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:44.988 } 00:26:44.988 ] 00:26:44.988 } 00:26:44.988 21:17:58 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:45.246 [2024-07-13 21:17:59.024379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:45.246 [2024-07-13 21:17:59.024423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:45.246 [2024-07-13 21:17:59.024454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:45.246 [2024-07-13 21:17:59.024465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:45.246 [2024-07-13 21:17:59.024494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:45.246 [2024-07-13 21:17:59.024506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:45.246 [2024-07-13 21:17:59.024516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:45.246 [2024-07-13 21:17:59.024526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:45.246 [2024-07-13 21:17:59.024549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:45.246 [2024-07-13 21:17:59.024560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:45.246 [2024-07-13 21:17:59.024570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:45.246 [2024-07-13 21:17:59.024578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:45.246 [2024-07-13 21:17:59.024637] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.246 ms, result 0 00:26:45.246 true 00:26:45.246 21:17:59 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:45.246 21:17:59 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:45.246 21:17:59 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:45.505 21:17:59 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:45.505 21:17:59 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:45.505 21:17:59 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:45.764 [2024-07-13 21:17:59.500945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:45.764 [2024-07-13 21:17:59.500994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:45.764 [2024-07-13 21:17:59.501027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:45.764 [2024-07-13 21:17:59.501053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:45.764 [2024-07-13 21:17:59.501083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:45.764 [2024-07-13 21:17:59.501113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:45.764 [2024-07-13 21:17:59.501124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:45.764 [2024-07-13 21:17:59.501133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:45.764 [2024-07-13 21:17:59.501157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:45.764 [2024-07-13 21:17:59.501169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:45.764 [2024-07-13 21:17:59.501178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:45.764 [2024-07-13 21:17:59.501188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:45.764 [2024-07-13 21:17:59.501316] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.331 ms, result 0 00:26:45.764 true 00:26:45.764 21:17:59 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:46.025 { 00:26:46.025 "name": "ftl", 00:26:46.025 "properties": [ 00:26:46.025 { 00:26:46.025 "name": "superblock_version", 00:26:46.025 "value": 5, 00:26:46.025 "read-only": true 00:26:46.025 }, 00:26:46.025 { 00:26:46.025 "name": 
"base_device", 00:26:46.025 "bands": [ 00:26:46.025 { 00:26:46.025 "id": 0, 00:26:46.025 "state": "FREE", 00:26:46.025 "validity": 0.0 00:26:46.025 }, 00:26:46.025 { 00:26:46.025 "id": 1, 00:26:46.025 "state": "FREE", 00:26:46.025 "validity": 0.0 00:26:46.025 }, 00:26:46.025 { 00:26:46.025 "id": 2, 00:26:46.025 "state": "FREE", 00:26:46.025 "validity": 0.0 00:26:46.025 }, 00:26:46.025 { 00:26:46.025 "id": 3, 00:26:46.025 "state": "FREE", 00:26:46.025 "validity": 0.0 00:26:46.025 }, 00:26:46.025 { 00:26:46.025 "id": 4, 00:26:46.025 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 5, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 6, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 7, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 8, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 9, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 10, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 11, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 12, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 13, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 14, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 15, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 16, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 17, 00:26:46.026 "state": "FREE", 00:26:46.026 "validity": 0.0 00:26:46.026 } 00:26:46.026 ], 00:26:46.026 "read-only": true 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "name": "cache_device", 00:26:46.026 "type": "bdev", 00:26:46.026 "chunks": [ 00:26:46.026 { 00:26:46.026 "id": 0, 00:26:46.026 "state": "CLOSED", 00:26:46.026 "utilization": 1.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 1, 00:26:46.026 "state": "CLOSED", 00:26:46.026 "utilization": 1.0 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 2, 00:26:46.026 "state": "OPEN", 00:26:46.026 "utilization": 0.001953125 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "id": 3, 00:26:46.026 "state": "OPEN", 00:26:46.026 "utilization": 0.0 00:26:46.026 } 00:26:46.026 ], 00:26:46.026 "read-only": true 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "name": "verbose_mode", 00:26:46.026 "value": true, 00:26:46.026 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:46.026 }, 00:26:46.026 { 00:26:46.026 "name": "prep_upgrade_on_shutdown", 00:26:46.026 "value": true, 00:26:46.026 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:46.026 } 00:26:46.026 ] 00:26:46.026 } 00:26:46.026 21:17:59 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:46.026 21:17:59 -- ftl/common.sh@130 -- # [[ -n 78191 ]] 00:26:46.026 21:17:59 -- ftl/common.sh@131 -- # killprocess 78191 00:26:46.026 21:17:59 -- common/autotest_common.sh@926 -- # '[' -z 78191 ']' 00:26:46.026 21:17:59 -- common/autotest_common.sh@930 -- 
# kill -0 78191 00:26:46.026 21:17:59 -- common/autotest_common.sh@931 -- # uname 00:26:46.026 21:17:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:46.026 21:17:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78191 00:26:46.026 killing process with pid 78191 00:26:46.026 21:17:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:46.026 21:17:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:46.026 21:17:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78191' 00:26:46.026 21:17:59 -- common/autotest_common.sh@945 -- # kill 78191 00:26:46.026 21:17:59 -- common/autotest_common.sh@950 -- # wait 78191 00:26:46.594 [2024-07-13 21:18:00.483695] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:46.594 [2024-07-13 21:18:00.498349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.594 [2024-07-13 21:18:00.498410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:46.594 [2024-07-13 21:18:00.498444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:46.594 [2024-07-13 21:18:00.498454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.594 [2024-07-13 21:18:00.498488] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:46.594 [2024-07-13 21:18:00.501640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.594 [2024-07-13 21:18:00.501686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:46.594 [2024-07-13 21:18:00.501715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.133 ms 00:26:46.594 [2024-07-13 21:18:00.501725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.524630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.713 [2024-07-13 21:18:08.524721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:54.713 [2024-07-13 21:18:08.524756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8022.921 ms 00:26:54.713 [2024-07-13 21:18:08.524767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.525960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.713 [2024-07-13 21:18:08.525993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:54.713 [2024-07-13 21:18:08.526007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.172 ms 00:26:54.713 [2024-07-13 21:18:08.526018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.527171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.713 [2024-07-13 21:18:08.527199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:26:54.713 [2024-07-13 21:18:08.527227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.108 ms 00:26:54.713 [2024-07-13 21:18:08.527236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.537484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.713 [2024-07-13 21:18:08.537519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:54.713 [2024-07-13 21:18:08.537549] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl] duration: 10.188 ms 00:26:54.713 [2024-07-13 21:18:08.537559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.544175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.713 [2024-07-13 21:18:08.544211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:54.713 [2024-07-13 21:18:08.544240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.579 ms 00:26:54.713 [2024-07-13 21:18:08.544256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.544342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.713 [2024-07-13 21:18:08.544360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:54.713 [2024-07-13 21:18:08.544371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:26:54.713 [2024-07-13 21:18:08.544380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.554486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.713 [2024-07-13 21:18:08.554518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:54.713 [2024-07-13 21:18:08.554547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.088 ms 00:26:54.713 [2024-07-13 21:18:08.554556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.564563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.713 [2024-07-13 21:18:08.564596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:54.713 [2024-07-13 21:18:08.564623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.972 ms 00:26:54.713 [2024-07-13 21:18:08.564633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.574470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.713 [2024-07-13 21:18:08.574501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:54.713 [2024-07-13 21:18:08.574528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.803 ms 00:26:54.713 [2024-07-13 21:18:08.574537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.584257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.713 [2024-07-13 21:18:08.584288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:54.713 [2024-07-13 21:18:08.584316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.657 ms 00:26:54.713 [2024-07-13 21:18:08.584325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.713 [2024-07-13 21:18:08.584358] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:54.713 [2024-07-13 21:18:08.584378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:54.713 [2024-07-13 21:18:08.584391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:54.713 [2024-07-13 21:18:08.584402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:54.713 [2024-07-13 21:18:08.584412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 
[2024-07-13 21:18:08.584421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:54.713 [2024-07-13 21:18:08.584566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:54.714 [2024-07-13 21:18:08.584576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:54.714 [2024-07-13 21:18:08.584604] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:54.714 [2024-07-13 21:18:08.584614] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5ad24337-5379-4549-8ed3-22af24313bce 00:26:54.714 [2024-07-13 21:18:08.584640] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:54.714 [2024-07-13 21:18:08.584654] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:54.714 [2024-07-13 21:18:08.584664] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:54.714 [2024-07-13 21:18:08.584674] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:54.714 [2024-07-13 21:18:08.584683] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:54.714 [2024-07-13 21:18:08.584693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:54.714 [2024-07-13 21:18:08.584730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:54.714 [2024-07-13 21:18:08.584739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:54.714 [2024-07-13 21:18:08.584749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:54.714 [2024-07-13 21:18:08.584759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.714 [2024-07-13 21:18:08.584769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:54.714 [2024-07-13 21:18:08.584783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.402 ms 00:26:54.714 [2024-07-13 21:18:08.584793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.714 [2024-07-13 21:18:08.597873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.714 [2024-07-13 21:18:08.597905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:54.714 [2024-07-13 21:18:08.597935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.058 ms 00:26:54.714 [2024-07-13 21:18:08.597945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.714 [2024-07-13 21:18:08.598136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.714 [2024-07-13 21:18:08.598182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:54.714 [2024-07-13 21:18:08.598207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.168 ms 00:26:54.714 [2024-07-13 21:18:08.598217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.973 [2024-07-13 21:18:08.643781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.973 [2024-07-13 21:18:08.643820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:54.973 [2024-07-13 21:18:08.643861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.973 [2024-07-13 21:18:08.643875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.973 [2024-07-13 21:18:08.643910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.973 [2024-07-13 21:18:08.643922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:54.973 [2024-07-13 21:18:08.643932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.973 [2024-07-13 21:18:08.643941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.973 [2024-07-13 21:18:08.644025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.973 [2024-07-13 21:18:08.644042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:54.973 [2024-07-13 21:18:08.644053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.973 [2024-07-13 21:18:08.644062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.973 [2024-07-13 21:18:08.644129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.973 [2024-07-13 21:18:08.644141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:54.973 [2024-07-13 21:18:08.644152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.973 [2024-07-13 21:18:08.644161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.973 [2024-07-13 21:18:08.721807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.973 [2024-07-13 21:18:08.721894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:54.973 [2024-07-13 21:18:08.721927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.974 [2024-07-13 21:18:08.721937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.974 [2024-07-13 21:18:08.755535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.974 [2024-07-13 21:18:08.755568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:54.974 [2024-07-13 21:18:08.755598] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.974 [2024-07-13 21:18:08.755609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.974 [2024-07-13 21:18:08.755689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.974 [2024-07-13 21:18:08.755712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:54.974 [2024-07-13 21:18:08.755722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.974 [2024-07-13 21:18:08.755732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.974 [2024-07-13 21:18:08.755781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.974 [2024-07-13 21:18:08.755795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:54.974 [2024-07-13 21:18:08.755820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.974 [2024-07-13 21:18:08.755887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.974 [2024-07-13 21:18:08.756004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.974 [2024-07-13 21:18:08.756028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:54.974 [2024-07-13 21:18:08.756040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.974 [2024-07-13 21:18:08.756050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.974 [2024-07-13 21:18:08.756103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.974 [2024-07-13 21:18:08.756125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:54.974 [2024-07-13 21:18:08.756138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.974 [2024-07-13 21:18:08.756149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.974 [2024-07-13 21:18:08.756200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.974 [2024-07-13 21:18:08.756229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:54.974 [2024-07-13 21:18:08.756246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.974 [2024-07-13 21:18:08.756256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.974 [2024-07-13 21:18:08.756314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:54.974 [2024-07-13 21:18:08.756330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:54.974 [2024-07-13 21:18:08.756341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:54.974 [2024-07-13 21:18:08.756351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.974 [2024-07-13 21:18:08.756482] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8258.146 ms, result 0 00:26:58.263 21:18:11 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:58.263 21:18:11 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:58.263 21:18:11 -- ftl/common.sh@81 -- # local base_bdev= 00:26:58.263 21:18:11 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:58.263 21:18:11 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:58.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
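[Editor's sketch: the target shutdown just completed went through the killprocess helper traced earlier for pids 78321 and 78191; a rough reconstruction of that pattern, not the verbatim autotest_common.sh source.]

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1              # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 0 # already gone
    if [[ $(uname) == Linux ]]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      # the real helper special-cases process_name = sudo here
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                            # block until the reactor exits
  }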
00:26:58.263 21:18:11 -- ftl/common.sh@89 -- # spdk_tgt_pid=78794 00:26:58.263 21:18:11 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:58.263 21:18:11 -- ftl/common.sh@91 -- # waitforlisten 78794 00:26:58.263 21:18:11 -- common/autotest_common.sh@819 -- # '[' -z 78794 ']' 00:26:58.263 21:18:11 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:58.263 21:18:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.263 21:18:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:58.263 21:18:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.263 21:18:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:58.263 21:18:11 -- common/autotest_common.sh@10 -- # set +x 00:26:58.263 [2024-07-13 21:18:12.089221] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:58.263 [2024-07-13 21:18:12.089378] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78794 ] 00:26:58.522 [2024-07-13 21:18:12.259047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.522 [2024-07-13 21:18:12.400118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:58.522 [2024-07-13 21:18:12.400332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.472 [2024-07-13 21:18:13.048761] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:59.472 [2024-07-13 21:18:13.048842] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:59.472 [2024-07-13 21:18:13.187637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.472 [2024-07-13 21:18:13.187696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:59.472 [2024-07-13 21:18:13.187729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:59.472 [2024-07-13 21:18:13.187740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.472 [2024-07-13 21:18:13.187808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.472 [2024-07-13 21:18:13.187833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:59.472 [2024-07-13 21:18:13.187860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:59.472 [2024-07-13 21:18:13.187889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.472 [2024-07-13 21:18:13.187938] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:59.472 [2024-07-13 21:18:13.188880] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:59.472 [2024-07-13 21:18:13.188920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.472 [2024-07-13 21:18:13.188937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:59.472 [2024-07-13 21:18:13.188948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.004 ms 00:26:59.472 [2024-07-13 21:18:13.188958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:26:59.472 [2024-07-13 21:18:13.190185] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:59.472 [2024-07-13 21:18:13.203593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.472 [2024-07-13 21:18:13.203629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:59.472 [2024-07-13 21:18:13.203660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.409 ms 00:26:59.472 [2024-07-13 21:18:13.203669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.472 [2024-07-13 21:18:13.203732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.472 [2024-07-13 21:18:13.203750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:59.472 [2024-07-13 21:18:13.203764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:59.472 [2024-07-13 21:18:13.203773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.472 [2024-07-13 21:18:13.207852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.472 [2024-07-13 21:18:13.207885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:59.472 [2024-07-13 21:18:13.207915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.934 ms 00:26:59.472 [2024-07-13 21:18:13.207924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.472 [2024-07-13 21:18:13.207973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.472 [2024-07-13 21:18:13.207989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:59.472 [2024-07-13 21:18:13.208006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:59.472 [2024-07-13 21:18:13.208015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.472 [2024-07-13 21:18:13.208064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.472 [2024-07-13 21:18:13.208079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:59.472 [2024-07-13 21:18:13.208090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:59.472 [2024-07-13 21:18:13.208098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.472 [2024-07-13 21:18:13.208180] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:59.472 [2024-07-13 21:18:13.211581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.472 [2024-07-13 21:18:13.211611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:59.472 [2024-07-13 21:18:13.211639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.413 ms 00:26:59.472 [2024-07-13 21:18:13.211648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.472 [2024-07-13 21:18:13.211684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.472 [2024-07-13 21:18:13.211698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:59.472 [2024-07-13 21:18:13.211708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:59.473 [2024-07-13 21:18:13.211717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.473 [2024-07-13 21:18:13.211744] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:59.473 
[2024-07-13 21:18:13.211778] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:59.473 [2024-07-13 21:18:13.211846] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:59.473 [2024-07-13 21:18:13.211901] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:59.473 [2024-07-13 21:18:13.211979] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:59.473 [2024-07-13 21:18:13.211993] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:59.473 [2024-07-13 21:18:13.212006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:59.473 [2024-07-13 21:18:13.212019] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:59.473 [2024-07-13 21:18:13.212030] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:59.473 [2024-07-13 21:18:13.212041] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:59.473 [2024-07-13 21:18:13.212050] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:59.473 [2024-07-13 21:18:13.212059] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:59.473 [2024-07-13 21:18:13.212072] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:59.473 [2024-07-13 21:18:13.212083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.473 [2024-07-13 21:18:13.212096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:59.473 [2024-07-13 21:18:13.212122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.342 ms 00:26:59.473 [2024-07-13 21:18:13.212132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.473 [2024-07-13 21:18:13.212202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.473 [2024-07-13 21:18:13.212217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:59.473 [2024-07-13 21:18:13.212228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:26:59.473 [2024-07-13 21:18:13.212237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.473 [2024-07-13 21:18:13.212340] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:59.473 [2024-07-13 21:18:13.212368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:59.473 [2024-07-13 21:18:13.212387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:59.473 [2024-07-13 21:18:13.212398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:59.473 [2024-07-13 21:18:13.212408] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:59.473 [2024-07-13 21:18:13.212417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:59.473 [2024-07-13 21:18:13.212427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:59.473 [2024-07-13 21:18:13.212436] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:59.473 [2024-07-13 21:18:13.212445] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 
00:26:59.473 [2024-07-13 21:18:13.212455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:59.473 [2024-07-13 21:18:13.212464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:59.473 [2024-07-13 21:18:13.212473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:59.473 [2024-07-13 21:18:13.212482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:59.473 [2024-07-13 21:18:13.212492] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:59.473 [2024-07-13 21:18:13.212516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:59.473 [2024-07-13 21:18:13.212541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:59.473 [2024-07-13 21:18:13.212550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:59.473 [2024-07-13 21:18:13.212559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:59.473 [2024-07-13 21:18:13.212568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:59.473 [2024-07-13 21:18:13.212577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:59.473 [2024-07-13 21:18:13.212586] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:59.473 [2024-07-13 21:18:13.212595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:59.473 [2024-07-13 21:18:13.212604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:59.473 [2024-07-13 21:18:13.212613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:59.473 [2024-07-13 21:18:13.212637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:59.473 [2024-07-13 21:18:13.212646] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:59.473 [2024-07-13 21:18:13.212654] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:59.473 [2024-07-13 21:18:13.212663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:59.473 [2024-07-13 21:18:13.212672] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:59.474 [2024-07-13 21:18:13.212680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:59.474 [2024-07-13 21:18:13.212689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:59.474 [2024-07-13 21:18:13.212708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:59.474 [2024-07-13 21:18:13.212735] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:59.474 [2024-07-13 21:18:13.212744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:59.474 [2024-07-13 21:18:13.212753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:59.474 [2024-07-13 21:18:13.212762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:59.474 [2024-07-13 21:18:13.212771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:59.474 [2024-07-13 21:18:13.212780] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:59.474 [2024-07-13 21:18:13.212789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:59.474 [2024-07-13 21:18:13.212798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:59.474 [2024-07-13 21:18:13.212807] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device 
layout: 00:26:59.474 [2024-07-13 21:18:13.212817] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:59.474 [2024-07-13 21:18:13.212826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:59.474 [2024-07-13 21:18:13.212836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:59.474 [2024-07-13 21:18:13.212846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:59.474 [2024-07-13 21:18:13.212870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:59.474 [2024-07-13 21:18:13.212881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:59.474 [2024-07-13 21:18:13.212891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:59.474 [2024-07-13 21:18:13.212900] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:59.474 [2024-07-13 21:18:13.212910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:59.474 [2024-07-13 21:18:13.212920] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:59.474 [2024-07-13 21:18:13.212932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:59.474 [2024-07-13 21:18:13.212944] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:59.474 [2024-07-13 21:18:13.212954] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:59.474 [2024-07-13 21:18:13.212964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:59.474 [2024-07-13 21:18:13.212973] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:59.474 [2024-07-13 21:18:13.212983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:59.474 [2024-07-13 21:18:13.212994] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:59.474 [2024-07-13 21:18:13.213003] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:59.474 [2024-07-13 21:18:13.213013] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:59.474 [2024-07-13 21:18:13.213023] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:59.474 [2024-07-13 21:18:13.213033] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:59.474 [2024-07-13 21:18:13.213055] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:59.474 [2024-07-13 21:18:13.213066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:59.474 [2024-07-13 21:18:13.213091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:59.474 [2024-07-13 21:18:13.213101] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:59.474 [2024-07-13 21:18:13.213112] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:59.474 [2024-07-13 21:18:13.213127] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:59.474 [2024-07-13 21:18:13.213137] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:59.474 [2024-07-13 21:18:13.213146] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:59.474 [2024-07-13 21:18:13.213156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:59.474 [2024-07-13 21:18:13.213168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.474 [2024-07-13 21:18:13.213178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:59.474 [2024-07-13 21:18:13.213188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.869 ms 00:26:59.474 [2024-07-13 21:18:13.213197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.474 [2024-07-13 21:18:13.228051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.228104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:59.475 [2024-07-13 21:18:13.228141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.796 ms 00:26:59.475 [2024-07-13 21:18:13.228152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.228197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.228211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:59.475 [2024-07-13 21:18:13.228222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:59.475 [2024-07-13 21:18:13.228231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.258774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.258832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:59.475 [2024-07-13 21:18:13.258877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.478 ms 00:26:59.475 [2024-07-13 21:18:13.258887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.258938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.258954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:59.475 [2024-07-13 21:18:13.258965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:59.475 [2024-07-13 21:18:13.258974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.259385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.259417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:59.475 [2024-07-13 
21:18:13.259431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:26:59.475 [2024-07-13 21:18:13.259447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.259499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.259514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:59.475 [2024-07-13 21:18:13.259525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:59.475 [2024-07-13 21:18:13.259534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.273972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.274007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:59.475 [2024-07-13 21:18:13.274039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.411 ms 00:26:59.475 [2024-07-13 21:18:13.274048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.286981] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:59.475 [2024-07-13 21:18:13.287018] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:59.475 [2024-07-13 21:18:13.287049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.287059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:59.475 [2024-07-13 21:18:13.287070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.881 ms 00:26:59.475 [2024-07-13 21:18:13.287080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.301080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.301146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:59.475 [2024-07-13 21:18:13.301190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.957 ms 00:26:59.475 [2024-07-13 21:18:13.301200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.313281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.313314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:59.475 [2024-07-13 21:18:13.313343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.035 ms 00:26:59.475 [2024-07-13 21:18:13.313353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.325403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.325436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:59.475 [2024-07-13 21:18:13.325464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.009 ms 00:26:59.475 [2024-07-13 21:18:13.325473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.325938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.325971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:59.475 [2024-07-13 21:18:13.325985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms 
00:26:59.475 [2024-07-13 21:18:13.325995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.475 [2024-07-13 21:18:13.385033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.475 [2024-07-13 21:18:13.385090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:59.475 [2024-07-13 21:18:13.385122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 59.012 ms 00:26:59.475 [2024-07-13 21:18:13.385132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.739 [2024-07-13 21:18:13.395714] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:59.739 [2024-07-13 21:18:13.396401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.739 [2024-07-13 21:18:13.396462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:59.739 [2024-07-13 21:18:13.396493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.208 ms 00:26:59.739 [2024-07-13 21:18:13.396502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.739 [2024-07-13 21:18:13.396585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.739 [2024-07-13 21:18:13.396619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:59.739 [2024-07-13 21:18:13.396646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:59.739 [2024-07-13 21:18:13.396656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.739 [2024-07-13 21:18:13.396797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.739 [2024-07-13 21:18:13.396816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:59.739 [2024-07-13 21:18:13.396830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:59.739 [2024-07-13 21:18:13.396856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.739 [2024-07-13 21:18:13.398515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.739 [2024-07-13 21:18:13.398547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:59.739 [2024-07-13 21:18:13.398579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.628 ms 00:26:59.739 [2024-07-13 21:18:13.398588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.739 [2024-07-13 21:18:13.398623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.739 [2024-07-13 21:18:13.398636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:59.739 [2024-07-13 21:18:13.398646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:59.739 [2024-07-13 21:18:13.398655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.739 [2024-07-13 21:18:13.398695] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:59.739 [2024-07-13 21:18:13.398710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.739 [2024-07-13 21:18:13.398719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:59.739 [2024-07-13 21:18:13.398729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:59.739 [2024-07-13 21:18:13.398741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.739 [2024-07-13 21:18:13.423293] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.739 [2024-07-13 21:18:13.423329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:59.739 [2024-07-13 21:18:13.423359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.513 ms 00:26:59.739 [2024-07-13 21:18:13.423369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.739 [2024-07-13 21:18:13.423442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.739 [2024-07-13 21:18:13.423459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:59.739 [2024-07-13 21:18:13.423477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:59.739 [2024-07-13 21:18:13.423486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.739 [2024-07-13 21:18:13.424967] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 236.669 ms, result 0 00:26:59.739 [2024-07-13 21:18:13.439634] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.739 [2024-07-13 21:18:13.455640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:59.739 [2024-07-13 21:18:13.463745] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:59.998 21:18:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:59.998 21:18:13 -- common/autotest_common.sh@852 -- # return 0 00:26:59.998 21:18:13 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:59.998 21:18:13 -- ftl/common.sh@95 -- # return 0 00:26:59.998 21:18:13 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:59.998 [2024-07-13 21:18:13.844818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.998 [2024-07-13 21:18:13.844902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:59.998 [2024-07-13 21:18:13.844938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:59.998 [2024-07-13 21:18:13.844949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.998 [2024-07-13 21:18:13.844984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.998 [2024-07-13 21:18:13.845000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:59.998 [2024-07-13 21:18:13.845011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:59.998 [2024-07-13 21:18:13.845021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.998 [2024-07-13 21:18:13.845061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.998 [2024-07-13 21:18:13.845074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:59.998 [2024-07-13 21:18:13.845085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:59.998 [2024-07-13 21:18:13.845116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.998 [2024-07-13 21:18:13.845251] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.372 ms, result 0 00:26:59.998 true 00:26:59.998 21:18:13 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:27:00.257 { 00:27:00.257 "name": "ftl", 00:27:00.257 "properties": [ 00:27:00.257 { 00:27:00.257 "name": "superblock_version", 00:27:00.257 "value": 5, 00:27:00.257 "read-only": true 00:27:00.257 }, 00:27:00.257 { 00:27:00.258 "name": "base_device", 00:27:00.258 "bands": [ 00:27:00.258 { 00:27:00.258 "id": 0, 00:27:00.258 "state": "CLOSED", 00:27:00.258 "validity": 1.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 1, 00:27:00.258 "state": "CLOSED", 00:27:00.258 "validity": 1.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 2, 00:27:00.258 "state": "CLOSED", 00:27:00.258 "validity": 0.007843137254901933 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 3, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 4, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 5, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 6, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 7, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 8, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 9, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 10, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 11, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 12, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 13, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 14, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 15, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 16, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 17, 00:27:00.258 "state": "FREE", 00:27:00.258 "validity": 0.0 00:27:00.258 } 00:27:00.258 ], 00:27:00.258 "read-only": true 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "name": "cache_device", 00:27:00.258 "type": "bdev", 00:27:00.258 "chunks": [ 00:27:00.258 { 00:27:00.258 "id": 0, 00:27:00.258 "state": "OPEN", 00:27:00.258 "utilization": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 1, 00:27:00.258 "state": "OPEN", 00:27:00.258 "utilization": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 2, 00:27:00.258 "state": "FREE", 00:27:00.258 "utilization": 0.0 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "id": 3, 00:27:00.258 "state": "FREE", 00:27:00.258 "utilization": 0.0 00:27:00.258 } 00:27:00.258 ], 00:27:00.258 "read-only": true 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "name": "verbose_mode", 00:27:00.258 "value": true, 00:27:00.258 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:00.258 }, 00:27:00.258 { 00:27:00.258 "name": "prep_upgrade_on_shutdown", 00:27:00.258 "value": false, 00:27:00.258 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:00.258 } 00:27:00.258 ] 00:27:00.258 } 00:27:00.258 21:18:14 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 
00:27:00.258 21:18:14 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:00.258 21:18:14 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:00.517 21:18:14 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:00.517 21:18:14 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:00.517 21:18:14 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:00.517 21:18:14 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:00.517 21:18:14 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:00.776 Validate MD5 checksum, iteration 1 00:27:00.776 21:18:14 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:00.776 21:18:14 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:00.776 21:18:14 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:00.776 21:18:14 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:00.776 21:18:14 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:00.776 21:18:14 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:00.776 21:18:14 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:00.776 21:18:14 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:00.776 21:18:14 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:00.776 21:18:14 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:00.776 21:18:14 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:00.776 21:18:14 -- ftl/common.sh@154 -- # return 0 00:27:00.776 21:18:14 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:00.776 [2024-07-13 21:18:14.637612] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
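A minimal sketch of the validation loop implied by the upgrade_shutdown.sh xtrace above; only the flags visible in the trace are verbatim, while the reference-checksum lookup and mismatch handling are assumptions (iterations is 2 in this run, and file.md5 — removed in the cleanup near the end — is assumed to hold one reference sum per iteration):

    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Read 1024 x 1 MiB blocks from the ftln1 bdev over NVMe/TCP into a file.
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            ((skip += 1024))
            sum=$(md5sum "$testdir/file" | cut -f1 -d ' ')
            # Compare against the sum recorded earlier in the test; any drift fails the run.
            [[ $sum != $(sed -n "$((i + 1))p" "$testdir/file.md5" | cut -f1 -d ' ') ]] && return 1
        done
    }

The same loop is rerun after the kill -9 and target restart further down, so matching sums are what prove the data survived the dirty shutdown.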
00:27:00.776 [2024-07-13 21:18:14.637784] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78833 ] 00:27:01.035 [2024-07-13 21:18:14.803539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.035 [2024-07-13 21:18:14.952325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.249  Copying: 523/1024 [MB] (523 MBps) Copying: 1022/1024 [MB] (499 MBps) Copying: 1024/1024 [MB] (average 510 MBps) 00:27:05.249 00:27:05.249 21:18:18 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:05.249 21:18:18 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:07.151 21:18:20 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:07.151 Validate MD5 checksum, iteration 2 00:27:07.151 21:18:20 -- ftl/upgrade_shutdown.sh@103 -- # sum=5fcfd487b2c66c54b3df2161cd035dc2 00:27:07.151 21:18:20 -- ftl/upgrade_shutdown.sh@105 -- # [[ 5fcfd487b2c66c54b3df2161cd035dc2 != \5\f\c\f\d\4\8\7\b\2\c\6\6\c\5\4\b\3\d\f\2\1\6\1\c\d\0\3\5\d\c\2 ]] 00:27:07.151 21:18:20 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:07.151 21:18:20 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:07.151 21:18:20 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:07.151 21:18:20 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:07.151 21:18:20 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:07.151 21:18:20 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:07.151 21:18:20 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:07.151 21:18:20 -- ftl/common.sh@154 -- # return 0 00:27:07.151 21:18:20 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:07.151 [2024-07-13 21:18:20.657712] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:07.151 [2024-07-13 21:18:20.657897] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78899 ] 00:27:07.151 [2024-07-13 21:18:20.828561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.151 [2024-07-13 21:18:21.021558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.523  Copying: 521/1024 [MB] (521 MBps) Copying: 1024/1024 [MB] (average 515 MBps) 00:27:11.523 00:27:11.524 21:18:25 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:11.524 21:18:25 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:12.902 21:18:26 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:12.902 21:18:26 -- ftl/upgrade_shutdown.sh@103 -- # sum=67b0f8bee4d61c57b4f23b8bfb7c3123 00:27:12.902 21:18:26 -- ftl/upgrade_shutdown.sh@105 -- # [[ 67b0f8bee4d61c57b4f23b8bfb7c3123 != \6\7\b\0\f\8\b\e\e\4\d\6\1\c\5\7\b\4\f\2\3\b\8\b\f\b\7\c\3\1\2\3 ]] 00:27:12.902 21:18:26 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:12.902 21:18:26 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:12.902 21:18:26 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:12.902 21:18:26 -- ftl/common.sh@137 -- # [[ -n 78794 ]] 00:27:12.902 21:18:26 -- ftl/common.sh@138 -- # kill -9 78794 00:27:12.902 21:18:26 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:12.902 21:18:26 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:12.902 21:18:26 -- ftl/common.sh@81 -- # local base_bdev= 00:27:12.902 21:18:26 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:12.902 21:18:26 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:12.902 21:18:26 -- ftl/common.sh@89 -- # spdk_tgt_pid=78962 00:27:12.902 21:18:26 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:12.902 21:18:26 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:12.902 21:18:26 -- ftl/common.sh@91 -- # waitforlisten 78962 00:27:12.902 21:18:26 -- common/autotest_common.sh@819 -- # '[' -z 78962 ']' 00:27:12.902 21:18:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.902 21:18:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:12.902 21:18:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.902 21:18:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:12.902 21:18:26 -- common/autotest_common.sh@10 -- # set +x 00:27:13.162 [2024-07-13 21:18:26.910231] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
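The kill -9 above is the point of the test: the target (pid 78794) is destroyed without giving FTL a chance to shut down cleanly, then relaunched from the JSON config saved at first startup. A sketch of the two ftl/common.sh helpers as reconstructed from the xtrace; anything beyond the traced lines is an assumption:

    tcp_target_shutdown_dirty() {
        # SIGKILL skips FTL's graceful shutdown path, leaving the device dirty.
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        local base_bdev= cache_bdev=
        # Restarting from tgt.json makes FTL find a dirty superblock and run
        # recovery; the 'Killed ... $spdk_tgt_bin' line below is bash reaping the old job.
        "$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
        spdk_tgt_pid=$!
        waitforlisten $spdk_tgt_pid
    }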
00:27:13.162 [2024-07-13 21:18:26.910378] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78962 ] 00:27:13.162 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 818: 78794 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:13.162 [2024-07-13 21:18:27.077329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.421 [2024-07-13 21:18:27.218310] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:13.421 [2024-07-13 21:18:27.218499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.989 [2024-07-13 21:18:27.850806] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:13.989 [2024-07-13 21:18:27.850878] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:14.249 [2024-07-13 21:18:27.988922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:27.988960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:14.249 [2024-07-13 21:18:27.988976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:14.249 [2024-07-13 21:18:27.988986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:27.989047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:27.989070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:14.249 [2024-07-13 21:18:27.989081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:14.249 [2024-07-13 21:18:27.989093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:27.989121] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:14.249 [2024-07-13 21:18:27.989824] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:14.249 [2024-07-13 21:18:27.989846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:27.989894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:14.249 [2024-07-13 21:18:27.989905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.731 ms 00:27:14.249 [2024-07-13 21:18:27.989915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:27.990330] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:14.249 [2024-07-13 21:18:28.006316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:28.006352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:14.249 [2024-07-13 21:18:28.006366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.988 ms 00:27:14.249 [2024-07-13 21:18:28.006375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:28.015438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:28.015468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:14.249 [2024-07-13 21:18:28.015480] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:14.249 [2024-07-13 21:18:28.015489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:28.015903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:28.015921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:14.249 [2024-07-13 21:18:28.015933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.335 ms 00:27:14.249 [2024-07-13 21:18:28.015942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:28.015982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:28.015996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:14.249 [2024-07-13 21:18:28.016006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:14.249 [2024-07-13 21:18:28.016015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:28.016047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:28.016059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:14.249 [2024-07-13 21:18:28.016084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:14.249 [2024-07-13 21:18:28.016108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:28.016135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:14.249 [2024-07-13 21:18:28.019229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:28.019256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:14.249 [2024-07-13 21:18:28.019268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.102 ms 00:27:14.249 [2024-07-13 21:18:28.019277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:28.019310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:28.019323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:14.249 [2024-07-13 21:18:28.019333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:14.249 [2024-07-13 21:18:28.019342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:28.019368] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:14.249 [2024-07-13 21:18:28.019388] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:14.249 [2024-07-13 21:18:28.019419] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:14.249 [2024-07-13 21:18:28.019441] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:14.249 [2024-07-13 21:18:28.019507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:14.249 [2024-07-13 21:18:28.019520] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:14.249 [2024-07-13 21:18:28.019531] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:27:14.249 [2024-07-13 21:18:28.019549] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:14.249 [2024-07-13 21:18:28.019559] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:14.249 [2024-07-13 21:18:28.019568] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:14.249 [2024-07-13 21:18:28.019577] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:14.249 [2024-07-13 21:18:28.019585] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:14.249 [2024-07-13 21:18:28.019594] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:14.249 [2024-07-13 21:18:28.019602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:28.019611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:14.249 [2024-07-13 21:18:28.019620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.236 ms 00:27:14.249 [2024-07-13 21:18:28.019628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:28.019685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.249 [2024-07-13 21:18:28.019699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:14.249 [2024-07-13 21:18:28.019708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:14.249 [2024-07-13 21:18:28.019716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.249 [2024-07-13 21:18:28.019799] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:14.249 [2024-07-13 21:18:28.019813] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:14.249 [2024-07-13 21:18:28.019823] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:14.249 [2024-07-13 21:18:28.019832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.249 [2024-07-13 21:18:28.019857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:14.249 [2024-07-13 21:18:28.019866] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:14.249 [2024-07-13 21:18:28.019874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:14.249 [2024-07-13 21:18:28.019882] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:14.249 [2024-07-13 21:18:28.019891] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:14.249 [2024-07-13 21:18:28.019901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.249 [2024-07-13 21:18:28.019911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:14.249 [2024-07-13 21:18:28.019919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:14.249 [2024-07-13 21:18:28.019927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.249 [2024-07-13 21:18:28.019935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:14.249 [2024-07-13 21:18:28.019943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:14.249 [2024-07-13 21:18:28.019951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.249 [2024-07-13 21:18:28.019959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:27:14.249 [2024-07-13 21:18:28.019967] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:14.249 [2024-07-13 21:18:28.019975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.249 [2024-07-13 21:18:28.019983] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:14.249 [2024-07-13 21:18:28.019991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:14.250 [2024-07-13 21:18:28.019999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:14.250 [2024-07-13 21:18:28.020007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:14.250 [2024-07-13 21:18:28.020014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:14.250 [2024-07-13 21:18:28.020022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:14.250 [2024-07-13 21:18:28.020030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:14.250 [2024-07-13 21:18:28.020037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:14.250 [2024-07-13 21:18:28.020045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:14.250 [2024-07-13 21:18:28.020053] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:14.250 [2024-07-13 21:18:28.020061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:14.250 [2024-07-13 21:18:28.020084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:14.250 [2024-07-13 21:18:28.020108] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:14.250 [2024-07-13 21:18:28.020117] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:14.250 [2024-07-13 21:18:28.020125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:14.250 [2024-07-13 21:18:28.020133] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:14.250 [2024-07-13 21:18:28.020157] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:14.250 [2024-07-13 21:18:28.020165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.250 [2024-07-13 21:18:28.020174] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:14.250 [2024-07-13 21:18:28.020182] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:14.250 [2024-07-13 21:18:28.020191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.250 [2024-07-13 21:18:28.020199] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:14.250 [2024-07-13 21:18:28.020210] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:14.250 [2024-07-13 21:18:28.020224] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:14.250 [2024-07-13 21:18:28.020234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:14.250 [2024-07-13 21:18:28.020243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:14.250 [2024-07-13 21:18:28.020253] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:14.250 [2024-07-13 21:18:28.020261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:14.250 [2024-07-13 21:18:28.020270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:14.250 [2024-07-13 21:18:28.020279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:27:14.250 [2024-07-13 21:18:28.020288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:14.250 [2024-07-13 21:18:28.020298] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:14.250 [2024-07-13 21:18:28.020310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.250 [2024-07-13 21:18:28.020320] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:14.250 [2024-07-13 21:18:28.020329] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:14.250 [2024-07-13 21:18:28.020339] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:14.250 [2024-07-13 21:18:28.020348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:14.250 [2024-07-13 21:18:28.020357] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:14.250 [2024-07-13 21:18:28.020367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:14.250 [2024-07-13 21:18:28.020376] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:14.250 [2024-07-13 21:18:28.020385] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:14.250 [2024-07-13 21:18:28.020404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:14.250 [2024-07-13 21:18:28.020414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:14.250 [2024-07-13 21:18:28.020424] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:14.250 [2024-07-13 21:18:28.020433] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:14.250 [2024-07-13 21:18:28.020443] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:14.250 [2024-07-13 21:18:28.020453] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:14.250 [2024-07-13 21:18:28.020463] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.250 [2024-07-13 21:18:28.020474] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:14.250 [2024-07-13 21:18:28.020483] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:14.250 [2024-07-13 21:18:28.020493] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:14.250 
[2024-07-13 21:18:28.020502] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:14.250 [2024-07-13 21:18:28.020513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.020523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:14.250 [2024-07-13 21:18:28.020533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.748 ms 00:27:14.250 [2024-07-13 21:18:28.020544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.250 [2024-07-13 21:18:28.035823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.035867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:14.250 [2024-07-13 21:18:28.035883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.225 ms 00:27:14.250 [2024-07-13 21:18:28.035892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.250 [2024-07-13 21:18:28.035933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.035950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:14.250 [2024-07-13 21:18:28.035960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:14.250 [2024-07-13 21:18:28.035969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.250 [2024-07-13 21:18:28.071050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.071093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:14.250 [2024-07-13 21:18:28.071112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.024 ms 00:27:14.250 [2024-07-13 21:18:28.071123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.250 [2024-07-13 21:18:28.071184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.071199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:14.250 [2024-07-13 21:18:28.071216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:14.250 [2024-07-13 21:18:28.071241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.250 [2024-07-13 21:18:28.071383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.071400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:14.250 [2024-07-13 21:18:28.071411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:27:14.250 [2024-07-13 21:18:28.071421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.250 [2024-07-13 21:18:28.071472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.071485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:14.250 [2024-07-13 21:18:28.071496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:14.250 [2024-07-13 21:18:28.071510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.250 [2024-07-13 21:18:28.088739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.088781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:14.250 [2024-07-13 
21:18:28.088799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.172 ms 00:27:14.250 [2024-07-13 21:18:28.088816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.250 [2024-07-13 21:18:28.088967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.088989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:14.250 [2024-07-13 21:18:28.089003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:14.250 [2024-07-13 21:18:28.089015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.250 [2024-07-13 21:18:28.107332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.107381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:14.250 [2024-07-13 21:18:28.107407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.288 ms 00:27:14.250 [2024-07-13 21:18:28.107418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.250 [2024-07-13 21:18:28.117660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.250 [2024-07-13 21:18:28.117710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:14.250 [2024-07-13 21:18:28.117724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:27:14.250 [2024-07-13 21:18:28.117734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.510 [2024-07-13 21:18:28.178925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.510 [2024-07-13 21:18:28.178997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:14.510 [2024-07-13 21:18:28.179014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 61.129 ms 00:27:14.510 [2024-07-13 21:18:28.179024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.510 [2024-07-13 21:18:28.179126] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:14.510 [2024-07-13 21:18:28.179172] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:14.510 [2024-07-13 21:18:28.179209] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:14.510 [2024-07-13 21:18:28.179295] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:14.510 [2024-07-13 21:18:28.179307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.510 [2024-07-13 21:18:28.179318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:14.510 [2024-07-13 21:18:28.179329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:27:14.510 [2024-07-13 21:18:28.179343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.510 [2024-07-13 21:18:28.179430] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:14.510 [2024-07-13 21:18:28.179452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.510 [2024-07-13 21:18:28.179463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:14.510 [2024-07-13 21:18:28.179474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:14.510 [2024-07-13 
21:18:28.179484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.510 [2024-07-13 21:18:28.195512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.510 [2024-07-13 21:18:28.195568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:14.510 [2024-07-13 21:18:28.195599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.988 ms 00:27:14.510 [2024-07-13 21:18:28.195610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.510 [2024-07-13 21:18:28.205245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.510 [2024-07-13 21:18:28.205294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:14.510 [2024-07-13 21:18:28.205324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:14.510 [2024-07-13 21:18:28.205334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.510 [2024-07-13 21:18:28.205394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.510 [2024-07-13 21:18:28.205409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:14.510 [2024-07-13 21:18:28.205425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:14.510 [2024-07-13 21:18:28.205435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.510 [2024-07-13 21:18:28.205601] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:15.079 [2024-07-13 21:18:28.756180] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:15.079 [2024-07-13 21:18:28.756404] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:15.647 [2024-07-13 21:18:29.314278] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:15.647 [2024-07-13 21:18:29.314439] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:15.647 [2024-07-13 21:18:29.314459] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:15.647 [2024-07-13 21:18:29.314474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.314486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:15.647 [2024-07-13 21:18:29.314501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1109.021 ms 00:27:15.647 [2024-07-13 21:18:29.314540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.314626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.314639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:15.647 [2024-07-13 21:18:29.314663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:15.647 [2024-07-13 21:18:29.314673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.324623] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:15.647 [2024-07-13 21:18:29.324808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.324826] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:15.647 [2024-07-13 21:18:29.324838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.115 ms 00:27:15.647 [2024-07-13 21:18:29.324849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.325593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.325625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:15.647 [2024-07-13 21:18:29.325639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.617 ms 00:27:15.647 [2024-07-13 21:18:29.325654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.327841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.327871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:15.647 [2024-07-13 21:18:29.327899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.149 ms 00:27:15.647 [2024-07-13 21:18:29.327908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.351885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.351920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:15.647 [2024-07-13 21:18:29.351954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.950 ms 00:27:15.647 [2024-07-13 21:18:29.351965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.352066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.352085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:15.647 [2024-07-13 21:18:29.352111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:15.647 [2024-07-13 21:18:29.352136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.353751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.353782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:15.647 [2024-07-13 21:18:29.353810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.592 ms 00:27:15.647 [2024-07-13 21:18:29.353823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.353868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.353883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:15.647 [2024-07-13 21:18:29.353894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:15.647 [2024-07-13 21:18:29.353903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.353956] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:15.647 [2024-07-13 21:18:29.353970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.353979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:15.647 [2024-07-13 21:18:29.354021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:15.647 [2024-07-13 21:18:29.354030] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.354093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.647 [2024-07-13 21:18:29.354106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:15.647 [2024-07-13 21:18:29.354117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:15.647 [2024-07-13 21:18:29.354126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.647 [2024-07-13 21:18:29.355363] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1365.882 ms, result 0 00:27:15.647 [2024-07-13 21:18:29.368325] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.647 [2024-07-13 21:18:29.384333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:15.647 [2024-07-13 21:18:29.392488] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:16.215 21:18:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:16.215 21:18:29 -- common/autotest_common.sh@852 -- # return 0 00:27:16.215 21:18:29 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:16.215 21:18:29 -- ftl/common.sh@95 -- # return 0 00:27:16.215 21:18:29 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:16.215 21:18:29 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:16.215 21:18:29 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:16.215 21:18:29 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:16.215 Validate MD5 checksum, iteration 1 00:27:16.215 21:18:29 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:16.215 21:18:29 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:16.215 21:18:29 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:16.215 21:18:29 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:16.215 21:18:29 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:16.215 21:18:29 -- ftl/common.sh@154 -- # return 0 00:27:16.215 21:18:29 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:16.215 [2024-07-13 21:18:29.985738] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
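The dirty start above took 1365.882 ms against 236.669 ms for the clean start earlier; the difference is recovery — preprocessing the four P2L checkpoints, replaying the two open NV-cache chunks (offsets 8032 and 270176), and restoring the L2P from shared memory. One way to double-check the replayed chunks afterwards, reusing the RPC and jq shapes shown earlier in this log (the .state filter is a variation, not taken from the trace):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.state != "FREE")]'

The checksum loop now repeats against the same expected sums.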
00:27:16.215 [2024-07-13 21:18:29.985941] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79007 ] 00:27:16.473 [2024-07-13 21:18:30.150601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.473 [2024-07-13 21:18:30.294119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.502  Copying: 514/1024 [MB] (514 MBps) Copying: 992/1024 [MB] (478 MBps) Copying: 1024/1024 [MB] (average 495 MBps) 00:27:21.502 00:27:21.502 21:18:35 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:21.502 21:18:35 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:23.406 21:18:37 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:23.406 Validate MD5 checksum, iteration 2 00:27:23.406 21:18:37 -- ftl/upgrade_shutdown.sh@103 -- # sum=5fcfd487b2c66c54b3df2161cd035dc2 00:27:23.406 21:18:37 -- ftl/upgrade_shutdown.sh@105 -- # [[ 5fcfd487b2c66c54b3df2161cd035dc2 != \5\f\c\f\d\4\8\7\b\2\c\6\6\c\5\4\b\3\d\f\2\1\6\1\c\d\0\3\5\d\c\2 ]] 00:27:23.406 21:18:37 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:23.406 21:18:37 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:23.406 21:18:37 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:23.406 21:18:37 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:23.406 21:18:37 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:23.406 21:18:37 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:23.406 21:18:37 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:23.406 21:18:37 -- ftl/common.sh@154 -- # return 0 00:27:23.406 21:18:37 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:23.406 [2024-07-13 21:18:37.280176] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
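tcp_dd, traced above, is a thin wrapper that points spdk_dd at the running target via the spdk.tgt.sock RPC socket and a pre-generated initiator config. A sketch matching the ftl/common.sh xtrace; the function body beyond the traced lines is an assumption:

    tcp_dd() {
        # tcp_initiator_setup short-circuits here: config/ini.json already exists,
        # so per the trace it just returns 0 instead of regenerating the config.
        tcp_initiator_setup
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
    }

Invoked in this iteration as: tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024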
00:27:23.406 [2024-07-13 21:18:37.280339] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79085 ] 00:27:23.665 [2024-07-13 21:18:37.442210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.665 [2024-07-13 21:18:37.585032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.882  Copying: 483/1024 [MB] (483 MBps) Copying: 992/1024 [MB] (509 MBps) Copying: 1024/1024 [MB] (average 496 MBps) 00:27:27.882 00:27:27.882 21:18:41 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:27.882 21:18:41 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@103 -- # sum=67b0f8bee4d61c57b4f23b8bfb7c3123 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@105 -- # [[ 67b0f8bee4d61c57b4f23b8bfb7c3123 != \6\7\b\0\f\8\b\e\e\4\d\6\1\c\5\7\b\4\f\2\3\b\8\b\f\b\7\c\3\1\2\3 ]] 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:29.784 21:18:43 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:29.784 21:18:43 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:29.784 21:18:43 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:29.784 21:18:43 -- ftl/common.sh@130 -- # [[ -n 78962 ]] 00:27:29.784 21:18:43 -- ftl/common.sh@131 -- # killprocess 78962 00:27:29.784 21:18:43 -- common/autotest_common.sh@926 -- # '[' -z 78962 ']' 00:27:29.784 21:18:43 -- common/autotest_common.sh@930 -- # kill -0 78962 00:27:29.784 21:18:43 -- common/autotest_common.sh@931 -- # uname 00:27:29.784 21:18:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:29.784 21:18:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78962 00:27:29.784 killing process with pid 78962 00:27:29.784 21:18:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:29.784 21:18:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:29.784 21:18:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78962' 00:27:29.784 21:18:43 -- common/autotest_common.sh@945 -- # kill 78962 00:27:29.784 21:18:43 -- common/autotest_common.sh@950 -- # wait 78962 00:27:30.353 [2024-07-13 21:18:44.266599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:30.613 [2024-07-13 21:18:44.282285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.282325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:30.613 [2024-07-13 21:18:44.282357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:30.613 [2024-07-13 21:18:44.282372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 
21:18:44.282399] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:30.613 [2024-07-13 21:18:44.285114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.285140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:30.613 [2024-07-13 21:18:44.285167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.698 ms 00:27:30.613 [2024-07-13 21:18:44.285177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.285391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.285436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:30.613 [2024-07-13 21:18:44.285448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:27:30.613 [2024-07-13 21:18:44.285458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.286676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.286731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:30.613 [2024-07-13 21:18:44.286746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.200 ms 00:27:30.613 [2024-07-13 21:18:44.286756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.287973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.288009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:30.613 [2024-07-13 21:18:44.288038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.180 ms 00:27:30.613 [2024-07-13 21:18:44.288047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.298272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.298306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:30.613 [2024-07-13 21:18:44.298336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.169 ms 00:27:30.613 [2024-07-13 21:18:44.298346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.303946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.303979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:30.613 [2024-07-13 21:18:44.304008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.565 ms 00:27:30.613 [2024-07-13 21:18:44.304018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.304088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.304110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:30.613 [2024-07-13 21:18:44.304120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:30.613 [2024-07-13 21:18:44.304130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.314320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.314368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:30.613 [2024-07-13 21:18:44.314398] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.171 ms 00:27:30.613 [2024-07-13 21:18:44.314407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.325582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.325630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:30.613 [2024-07-13 21:18:44.325659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.140 ms 00:27:30.613 [2024-07-13 21:18:44.325668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.336858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.336916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:30.613 [2024-07-13 21:18:44.336947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.153 ms 00:27:30.613 [2024-07-13 21:18:44.336958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.347528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.613 [2024-07-13 21:18:44.347575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:30.613 [2024-07-13 21:18:44.347605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.504 ms 00:27:30.613 [2024-07-13 21:18:44.347614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.613 [2024-07-13 21:18:44.347648] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:30.613 [2024-07-13 21:18:44.347668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:30.613 [2024-07-13 21:18:44.347680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:30.613 [2024-07-13 21:18:44.347690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:30.613 [2024-07-13 21:18:44.347700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347796] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:30.613 [2024-07-13 21:18:44.347912] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:30.613 [2024-07-13 21:18:44.347937] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5ad24337-5379-4549-8ed3-22af24313bce 00:27:30.613 [2024-07-13 21:18:44.347949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:30.613 [2024-07-13 21:18:44.347959] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:30.613 [2024-07-13 21:18:44.347969] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:30.613 [2024-07-13 21:18:44.347984] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:30.613 [2024-07-13 21:18:44.347994] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:30.614 [2024-07-13 21:18:44.348005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:30.614 [2024-07-13 21:18:44.348015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:30.614 [2024-07-13 21:18:44.348024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:30.614 [2024-07-13 21:18:44.348033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:30.614 [2024-07-13 21:18:44.348044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.614 [2024-07-13 21:18:44.348055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:30.614 [2024-07-13 21:18:44.348068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.397 ms 00:27:30.614 [2024-07-13 21:18:44.348078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.363644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.614 [2024-07-13 21:18:44.363714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:30.614 [2024-07-13 21:18:44.363729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.542 ms 00:27:30.614 [2024-07-13 21:18:44.363740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.364023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.614 [2024-07-13 21:18:44.364044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:30.614 [2024-07-13 21:18:44.364057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:27:30.614 [2024-07-13 21:18:44.364068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.417028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.417114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:30.614 [2024-07-13 21:18:44.417145] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.417170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.417215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.417227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:30.614 [2024-07-13 21:18:44.417237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.417246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.417350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.417366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:30.614 [2024-07-13 21:18:44.417398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.417424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.417447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.417458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:30.614 [2024-07-13 21:18:44.417469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.417479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.498387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.498460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:30.614 [2024-07-13 21:18:44.498492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.498502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.529662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.529712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:30.614 [2024-07-13 21:18:44.529743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.529754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.529828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.529859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:30.614 [2024-07-13 21:18:44.529905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.529924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.529979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.529993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:30.614 [2024-07-13 21:18:44.530003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.530029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.530150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.530167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:30.614 [2024-07-13 
21:18:44.530178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.530188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.530239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.530257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:30.614 [2024-07-13 21:18:44.530268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.530278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.530320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.530332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:30.614 [2024-07-13 21:18:44.530343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.530352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.530407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.614 [2024-07-13 21:18:44.530422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:30.614 [2024-07-13 21:18:44.530433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.614 [2024-07-13 21:18:44.530443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.614 [2024-07-13 21:18:44.530582] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 248.258 ms, result 0 00:27:31.551 21:18:45 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:31.551 21:18:45 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:31.551 21:18:45 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:31.551 21:18:45 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:31.551 21:18:45 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:31.551 21:18:45 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:31.551 Remove shared memory files 00:27:31.551 21:18:45 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:31.551 21:18:45 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:31.551 21:18:45 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:31.551 21:18:45 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:31.551 21:18:45 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78794 00:27:31.551 21:18:45 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:31.551 21:18:45 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:31.551 00:27:31.551 real 1m25.052s 00:27:31.551 user 2m1.964s 00:27:31.551 sys 0m20.789s 00:27:31.551 21:18:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.551 21:18:45 -- common/autotest_common.sh@10 -- # set +x 00:27:31.552 ************************************ 00:27:31.552 END TEST ftl_upgrade_shutdown 00:27:31.552 ************************************ 00:27:31.811 21:18:45 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:31.811 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:31.811 21:18:45 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:31.811 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:31.811 21:18:45 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:31.811 21:18:45 -- ftl/ftl.sh@14 -- # killprocess 71134 
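
A note on the two "unary operator expected" failures just above: the xtrace shows the already-expanded tests '[' -eq 1 ']' at ftl.sh lines 82 and 89, which means whatever variable those lines compare expanded to nothing, leaving `[` with no left operand. The test returns status 2 and the if simply falls through, which is why the run still reaches at_ftl_exit. A minimal sketch of the failure mode and the usual guards, using a hypothetical FTL_NIGHTLY_FLAG variable (the real variable name at those lines is not visible in this log):

#!/usr/bin/env bash
# Hypothetical stand-in for whatever ftl.sh line 82 actually tests;
# the log only shows the expanded form '[ -eq 1 ]'.
unset FTL_NIGHTLY_FLAG

# Broken: with the variable unset this expands to `[ -eq 1 ]`, so `[`
# treats -eq as its first operand and prints "unary operator expected"
# (exit status 2); the if falls through and the script keeps running.
if [ $FTL_NIGHTLY_FLAG -eq 1 ]; then
    echo "nightly path"
fi

# Guarded: default the expansion so the test always has two operands.
if [ "${FTL_NIGHTLY_FLAG:-0}" -eq 1 ]; then
    echo "nightly path"
fi

# Alternatively, [[ ]] evaluates -eq operands arithmetically, so an
# empty expansion is treated as 0 instead of raising a syntax error.
if [[ $FTL_NIGHTLY_FLAG -eq 1 ]]; then
    echo "nightly path"
fi
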
00:27:31.811 21:18:45 -- common/autotest_common.sh@926 -- # '[' -z 71134 ']' 00:27:31.811 21:18:45 -- common/autotest_common.sh@930 -- # kill -0 71134 00:27:31.811 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (71134) - No such process 00:27:31.811 Process with pid 71134 is not found 00:27:31.811 21:18:45 -- common/autotest_common.sh@953 -- # echo 'Process with pid 71134 is not found' 00:27:31.811 21:18:45 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:27:31.811 21:18:45 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79210 00:27:31.811 21:18:45 -- ftl/ftl.sh@20 -- # waitforlisten 79210 00:27:31.811 21:18:45 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:31.811 21:18:45 -- common/autotest_common.sh@819 -- # '[' -z 79210 ']' 00:27:31.811 21:18:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.811 21:18:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:31.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.811 21:18:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.811 21:18:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:31.811 21:18:45 -- common/autotest_common.sh@10 -- # set +x 00:27:31.811 [2024-07-13 21:18:45.614579] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:31.811 [2024-07-13 21:18:45.614743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79210 ] 00:27:32.070 [2024-07-13 21:18:45.779729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.071 [2024-07-13 21:18:45.924040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:32.071 [2024-07-13 21:18:45.924288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.639 21:18:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:32.639 21:18:46 -- common/autotest_common.sh@852 -- # return 0 00:27:32.639 21:18:46 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:27:32.898 nvme0n1 00:27:33.157 21:18:46 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:33.158 21:18:46 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:33.158 21:18:46 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:33.158 21:18:47 -- ftl/common.sh@28 -- # stores=515e5bcf-86e4-47db-96fc-4c09c3c960a6 00:27:33.158 21:18:47 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:33.158 21:18:47 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 515e5bcf-86e4-47db-96fc-4c09c3c960a6 00:27:33.416 21:18:47 -- ftl/ftl.sh@23 -- # killprocess 79210 00:27:33.416 21:18:47 -- common/autotest_common.sh@926 -- # '[' -z 79210 ']' 00:27:33.416 21:18:47 -- common/autotest_common.sh@930 -- # kill -0 79210 00:27:33.416 21:18:47 -- common/autotest_common.sh@931 -- # uname 00:27:33.416 21:18:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:33.416 21:18:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79210 00:27:33.416 21:18:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:33.416 21:18:47 -- 
common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:33.416 killing process with pid 79210 00:27:33.416 21:18:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79210' 00:27:33.416 21:18:47 -- common/autotest_common.sh@945 -- # kill 79210 00:27:33.416 21:18:47 -- common/autotest_common.sh@950 -- # wait 79210 00:27:35.322 21:18:48 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:35.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:35.322 Waiting for block devices as requested 00:27:35.322 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:27:35.581 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:27:35.581 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:35.581 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:40.851 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:27:40.851 21:18:54 -- ftl/ftl.sh@28 -- # remove_shm 00:27:40.851 Remove shared memory files 00:27:40.851 21:18:54 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:40.851 21:18:54 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:40.851 21:18:54 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:40.851 21:18:54 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:40.851 21:18:54 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:40.851 21:18:54 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:40.851 00:27:40.851 real 11m53.172s 00:27:40.851 user 14m44.873s 00:27:40.851 sys 1m22.499s 00:27:40.851 21:18:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.851 ************************************ 00:27:40.851 END TEST ftl 00:27:40.851 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:27:40.851 ************************************ 00:27:40.851 21:18:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:40.851 21:18:54 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:40.851 21:18:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:40.851 21:18:54 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:27:40.851 21:18:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:40.851 21:18:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:40.851 21:18:54 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:40.851 21:18:54 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:27:40.851 21:18:54 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:27:40.851 21:18:54 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:27:40.851 21:18:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:40.851 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:27:40.851 21:18:54 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:27:40.851 21:18:54 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:27:40.851 21:18:54 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:27:40.851 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:27:42.229 INFO: APP EXITING 00:27:42.229 INFO: killing all VMs 00:27:42.229 INFO: killing vhost app 00:27:42.229 INFO: EXIT DONE 00:27:43.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:43.166 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:27:43.166 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:27:43.166 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:43.166 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:43.734 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:43.993 Cleaning 00:27:43.993 Removing: /var/run/dpdk/spdk0/config 00:27:43.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:43.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:43.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:43.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:43.993 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:43.993 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:43.993 Removing: /var/run/dpdk/spdk0 00:27:43.993 Removing: /var/run/dpdk/spdk_pid56192 00:27:43.993 Removing: /var/run/dpdk/spdk_pid56396 00:27:43.993 Removing: /var/run/dpdk/spdk_pid56690 00:27:43.993 Removing: /var/run/dpdk/spdk_pid56793 00:27:43.994 Removing: /var/run/dpdk/spdk_pid56893 00:27:43.994 Removing: /var/run/dpdk/spdk_pid57003 00:27:43.994 Removing: /var/run/dpdk/spdk_pid57104 00:27:43.994 Removing: /var/run/dpdk/spdk_pid57143 00:27:43.994 Removing: /var/run/dpdk/spdk_pid57180 00:27:43.994 Removing: /var/run/dpdk/spdk_pid57247 00:27:43.994 Removing: /var/run/dpdk/spdk_pid57353 00:27:43.994 Removing: /var/run/dpdk/spdk_pid57791 00:27:43.994 Removing: /var/run/dpdk/spdk_pid57855 00:27:43.994 Removing: /var/run/dpdk/spdk_pid57931 00:27:43.994 Removing: /var/run/dpdk/spdk_pid57955 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58084 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58113 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58243 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58272 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58331 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58363 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58424 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58450 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58627 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58669 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58749 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58821 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58858 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58930 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58956 00:27:43.994 Removing: /var/run/dpdk/spdk_pid58997 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59029 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59070 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59096 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59141 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59168 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59215 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59241 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59282 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59308 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59354 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59381 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59422 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59453 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59500 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59526 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59567 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59593 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59634 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59670 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59712 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59738 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59779 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59811 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59852 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59883 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59930 00:27:43.994 Removing: /var/run/dpdk/spdk_pid59956 00:27:43.994 Removing: 
/var/run/dpdk/spdk_pid59997 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60033 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60075 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60104 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60154 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60188 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60233 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60265 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60306 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60342 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60386 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60467 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60577 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60736 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60839 00:27:43.994 Removing: /var/run/dpdk/spdk_pid60881 00:27:43.994 Removing: /var/run/dpdk/spdk_pid61357 00:27:44.252 Removing: /var/run/dpdk/spdk_pid61489 00:27:44.252 Removing: /var/run/dpdk/spdk_pid61598 00:27:44.252 Removing: /var/run/dpdk/spdk_pid61651 00:27:44.252 Removing: /var/run/dpdk/spdk_pid61682 00:27:44.252 Removing: /var/run/dpdk/spdk_pid61757 00:27:44.252 Removing: /var/run/dpdk/spdk_pid62419 00:27:44.252 Removing: /var/run/dpdk/spdk_pid62461 00:27:44.252 Removing: /var/run/dpdk/spdk_pid62977 00:27:44.252 Removing: /var/run/dpdk/spdk_pid63086 00:27:44.252 Removing: /var/run/dpdk/spdk_pid63200 00:27:44.252 Removing: /var/run/dpdk/spdk_pid63249 00:27:44.252 Removing: /var/run/dpdk/spdk_pid63280 00:27:44.252 Removing: /var/run/dpdk/spdk_pid63311 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65240 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65390 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65394 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65406 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65459 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65463 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65480 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65525 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65529 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65541 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65591 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65595 00:27:44.252 Removing: /var/run/dpdk/spdk_pid65607 00:27:44.252 Removing: /var/run/dpdk/spdk_pid67006 00:27:44.252 Removing: /var/run/dpdk/spdk_pid67118 00:27:44.252 Removing: /var/run/dpdk/spdk_pid67262 00:27:44.252 Removing: /var/run/dpdk/spdk_pid67389 00:27:44.252 Removing: /var/run/dpdk/spdk_pid67504 00:27:44.252 Removing: /var/run/dpdk/spdk_pid67619 00:27:44.252 Removing: /var/run/dpdk/spdk_pid67756 00:27:44.252 Removing: /var/run/dpdk/spdk_pid67830 00:27:44.252 Removing: /var/run/dpdk/spdk_pid67969 00:27:44.252 Removing: /var/run/dpdk/spdk_pid68368 00:27:44.252 Removing: /var/run/dpdk/spdk_pid68405 00:27:44.252 Removing: /var/run/dpdk/spdk_pid68865 00:27:44.252 Removing: /var/run/dpdk/spdk_pid69051 00:27:44.252 Removing: /var/run/dpdk/spdk_pid69160 00:27:44.252 Removing: /var/run/dpdk/spdk_pid69267 00:27:44.252 Removing: /var/run/dpdk/spdk_pid69316 00:27:44.252 Removing: /var/run/dpdk/spdk_pid69347 00:27:44.252 Removing: /var/run/dpdk/spdk_pid69645 00:27:44.252 Removing: /var/run/dpdk/spdk_pid69713 00:27:44.252 Removing: /var/run/dpdk/spdk_pid69795 00:27:44.252 Removing: /var/run/dpdk/spdk_pid70188 00:27:44.252 Removing: /var/run/dpdk/spdk_pid70342 00:27:44.252 Removing: /var/run/dpdk/spdk_pid71134 00:27:44.252 Removing: /var/run/dpdk/spdk_pid71267 00:27:44.252 Removing: /var/run/dpdk/spdk_pid71468 00:27:44.252 Removing: /var/run/dpdk/spdk_pid71566 00:27:44.252 Removing: /var/run/dpdk/spdk_pid71924 
00:27:44.252 Removing: /var/run/dpdk/spdk_pid72188 00:27:44.252 Removing: /var/run/dpdk/spdk_pid72542 00:27:44.252 Removing: /var/run/dpdk/spdk_pid72754 00:27:44.252 Removing: /var/run/dpdk/spdk_pid72902 00:27:44.252 Removing: /var/run/dpdk/spdk_pid72972 00:27:44.252 Removing: /var/run/dpdk/spdk_pid73124 00:27:44.252 Removing: /var/run/dpdk/spdk_pid73155 00:27:44.252 Removing: /var/run/dpdk/spdk_pid73226 00:27:44.252 Removing: /var/run/dpdk/spdk_pid73436 00:27:44.252 Removing: /var/run/dpdk/spdk_pid73691 00:27:44.252 Removing: /var/run/dpdk/spdk_pid74164 00:27:44.252 Removing: /var/run/dpdk/spdk_pid74667 00:27:44.252 Removing: /var/run/dpdk/spdk_pid75139 00:27:44.252 Removing: /var/run/dpdk/spdk_pid75687 00:27:44.252 Removing: /var/run/dpdk/spdk_pid75830 00:27:44.252 Removing: /var/run/dpdk/spdk_pid75915 00:27:44.252 Removing: /var/run/dpdk/spdk_pid76634 00:27:44.252 Removing: /var/run/dpdk/spdk_pid76698 00:27:44.252 Removing: /var/run/dpdk/spdk_pid77209 00:27:44.252 Removing: /var/run/dpdk/spdk_pid77637 00:27:44.252 Removing: /var/run/dpdk/spdk_pid78191 00:27:44.252 Removing: /var/run/dpdk/spdk_pid78321 00:27:44.252 Removing: /var/run/dpdk/spdk_pid78379 00:27:44.252 Removing: /var/run/dpdk/spdk_pid78446 00:27:44.252 Removing: /var/run/dpdk/spdk_pid78513 00:27:44.252 Removing: /var/run/dpdk/spdk_pid78578 00:27:44.252 Removing: /var/run/dpdk/spdk_pid78794 00:27:44.252 Removing: /var/run/dpdk/spdk_pid78833 00:27:44.252 Removing: /var/run/dpdk/spdk_pid78899 00:27:44.252 Removing: /var/run/dpdk/spdk_pid78962 00:27:44.252 Removing: /var/run/dpdk/spdk_pid79007 00:27:44.511 Removing: /var/run/dpdk/spdk_pid79085 00:27:44.511 Removing: /var/run/dpdk/spdk_pid79210 00:27:44.511 Clean 00:27:44.511 killing process with pid 48327 00:27:44.511 killing process with pid 48331 00:27:44.511 21:18:58 -- common/autotest_common.sh@1436 -- # return 0 00:27:44.511 21:18:58 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:27:44.511 21:18:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:44.511 21:18:58 -- common/autotest_common.sh@10 -- # set +x 00:27:44.511 21:18:58 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:27:44.511 21:18:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:44.511 21:18:58 -- common/autotest_common.sh@10 -- # set +x 00:27:44.511 21:18:58 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:44.511 21:18:58 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:44.511 21:18:58 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:44.511 21:18:58 -- spdk/autotest.sh@394 -- # hash lcov 00:27:44.511 21:18:58 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:44.511 21:18:58 -- spdk/autotest.sh@396 -- # hostname 00:27:44.511 21:18:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:44.770 geninfo: WARNING: invalid characters removed from testname! 
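
The coverage epilogue that follows is the standard lcov flow: capture the counters produced by the test run, merge them with the pre-test baseline, then strip third-party and helper paths so they do not count toward SPDK coverage. A condensed sketch of the sequence the next lines execute (LCOV_OPTS abbreviates the --rc/--no-external flags shown above; the final genhtml step is an assumption and does not appear in this log):

#!/usr/bin/env bash
# Condensed sketch of the lcov post-processing performed below.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
out=/home/vagrant/spdk_repo/output   # i.e. spdk/../output, as used in the log

# Capture counters from the build tree; -t tags the tracefile with the host name.
lcov $LCOV_OPTS -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"

# Merge the pre-test baseline with the post-test capture.
lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# Remove paths that should not count toward SPDK coverage, rewriting in place.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done

# Rendering an HTML report would be the usual last step (not part of this log):
# genhtml "$out/cov_total.info" -o "$out/coverage"
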
00:28:06.692 21:19:18 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:07.629 21:19:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:10.164 21:19:23 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:12.105 21:19:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:14.639 21:19:28 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:16.545 21:19:30 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:18.448 21:19:32 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:18.448 21:19:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:18.448 21:19:32 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:18.448 21:19:32 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.448 21:19:32 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.448 21:19:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.448 21:19:32 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.448 21:19:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.448 21:19:32 -- paths/export.sh@5 -- $ export PATH 00:28:18.448 21:19:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.448 21:19:32 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:18.448 21:19:32 -- common/autobuild_common.sh@435 -- $ date +%s 00:28:18.448 21:19:32 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720905572.XXXXXX 00:28:18.448 21:19:32 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720905572.YEnQvQ 00:28:18.448 21:19:32 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:28:18.448 21:19:32 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:28:18.448 21:19:32 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:18.448 21:19:32 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:18.448 21:19:32 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:18.448 21:19:32 -- common/autobuild_common.sh@451 -- $ get_config_params 00:28:18.448 21:19:32 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:28:18.448 21:19:32 -- common/autotest_common.sh@10 -- $ set +x 00:28:18.707 21:19:32 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:28:18.707 21:19:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:18.707 21:19:32 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:18.707 21:19:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:18.707 21:19:32 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:18.707 21:19:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:18.707 21:19:32 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:18.707 21:19:32 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:18.707 21:19:32 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:18.707 21:19:32 -- common/autotest_common.sh@727 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:18.707 21:19:32 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:18.707 + [[ -n 5157 ]] 00:28:18.707 + sudo kill 5157 00:28:18.717 [Pipeline] } 00:28:18.736 [Pipeline] // timeout 00:28:18.741 [Pipeline] } 00:28:18.758 [Pipeline] // stage 00:28:18.764 [Pipeline] } 00:28:18.781 [Pipeline] // catchError 00:28:18.790 [Pipeline] stage 00:28:18.793 [Pipeline] { (Stop VM) 00:28:18.806 [Pipeline] sh 00:28:19.086 + vagrant halt 00:28:22.372 ==> default: Halting domain... 00:28:28.952 [Pipeline] sh 00:28:29.232 + vagrant destroy -f 00:28:31.774 ==> default: Removing domain... 00:28:32.354 [Pipeline] sh 00:28:32.635 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:28:32.644 [Pipeline] } 00:28:32.663 [Pipeline] // stage 00:28:32.669 [Pipeline] } 00:28:32.688 [Pipeline] // dir 00:28:32.694 [Pipeline] } 00:28:32.712 [Pipeline] // wrap 00:28:32.719 [Pipeline] } 00:28:32.735 [Pipeline] // catchError 00:28:32.743 [Pipeline] stage 00:28:32.745 [Pipeline] { (Epilogue) 00:28:32.758 [Pipeline] sh 00:28:33.037 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:38.316 [Pipeline] catchError 00:28:38.319 [Pipeline] { 00:28:38.334 [Pipeline] sh 00:28:38.614 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:38.615 Artifacts sizes are good 00:28:38.623 [Pipeline] } 00:28:38.639 [Pipeline] // catchError 00:28:38.650 [Pipeline] archiveArtifacts 00:28:38.656 Archiving artifacts 00:28:38.797 [Pipeline] cleanWs 00:28:38.810 [WS-CLEANUP] Deleting project workspace... 00:28:38.810 [WS-CLEANUP] Deferred wipeout is used... 00:28:38.836 [WS-CLEANUP] done 00:28:38.837 [Pipeline] } 00:28:38.854 [Pipeline] // stage 00:28:38.859 [Pipeline] } 00:28:38.874 [Pipeline] // node 00:28:38.879 [Pipeline] End of Pipeline 00:28:38.910 Finished: SUCCESS
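
With coverage done, the pipeline epilogue above halts and destroys the Vagrant VM, publishes the run's output into the Jenkins workspace, compresses the artifacts and checks their sizes, then archives them and wipes the workspace. As a plain-shell sketch of that teardown (command names and paths are taken from the log; the working-directory handling is an assumption, since the pipeline's dir blocks are not spelled out in the console output):

#!/usr/bin/env bash
# Sketch of the VM/artifact epilogue; directory layout is assumed.
set -e

# Stop and delete the test VM ("Halting domain..." / "Removing domain...").
vagrant halt
vagrant destroy -f

# Publish the run's output directory into the Jenkins workspace.
mv output /var/jenkins/workspace/nvme-vg-autotest/output

# Compress artifacts, then verify their sizes before archiveArtifacts runs.
cd /var/jenkins/workspace/nvme-vg-autotest
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
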